
How EpiK Protocol “Saved the Miners” from Filecoin with the E2P Storage Model?


https://preview.redd.it/n5jzxozn27v51.png?width=2222&format=png&auto=webp&s=6cd6bd726582bbe2c595e1e467aeb3fc8aabe36f
On October 20, Eric Yao, Head of EpiK China, and Leo, Co-Founder & CTO of EpiK, visited the Deep Chain Online Salon and discussed “How EpiK saved the miners eliminated by Filecoin by launching the E2P storage model”. The following is a transcript of the sharing.
Sharing Session
Eric: Hello everyone, I’m Eric. I graduated from the School of Information Science at Tsinghua University. My Master’s research was on data storage and big data computing, and I published a number of papers at top industry conferences.
Since 2013, I have invested in Bitcoin, Ethereum, Ripple, Dogecoin, EOS and other well-known blockchain projects, and have been active in the blockchain space as an early technology-focused investor and industry observer, with 2 years of hands-on blockchain experience. I am also a blockchain community initiator and technology evangelist.
Leo: Hi, I’m Leo, the CTO of EpiK. Before I got involved in founding EpiK, I spent 3 to 4 years working on blockchains, public chains, wallets, block explorers, decentralized exchanges, task distribution platforms, smart contracts and so on, and I’ve shipped some good products. EpiK is an answer to the question we’ve been asking for years about how blockchain should be put to real-world use, and we hope that EpiK can be an answer for you as well.
Q & A
Deep Chain Finance:
First of all, let me ask Eric: on October 15, Filecoin’s mainnet launched, which attracted a lot of attention, but at the same time calls for forks within the Filecoin community never stopped, and the EpiK Protocol is one of them. What I want to know is: what kind of project is EpiK Protocol? For what reason did you choose to fork in the first place? And what are the differences between the forked project and Filecoin itself?
Eric:
First of all, let me answer the first question, what kind of project is EpiK Protocol.
With the Fourth Industrial Revolution already upon us, comprehensive intelligence is one of the core goals of this stage, and the key to comprehensive intelligence is making machines understand what humans know and learn new knowledge based on what they already know. Building knowledge graphs at scale is a key step towards that goal.
In order to solve the many challenges of building large-scale knowledge graphs, the EpiK Protocol was born. EpiK Protocol is a decentralized, hyper-scale knowledge graph that organizes and incentivizes knowledge through decentralized storage technology, decentralized autonomous organizations, and a generalized economic model. Members of the global community will expand the horizons of artificial intelligence into a smarter future by organizing all areas of human knowledge into a knowledge graph that is shared and continuously updated, an eternal knowledge vault for humanity.
And then, for what reason was the fork chosen in the first place?
EpiK’s project founders are all senior blockchain industry practitioners and have been closely following the industry development and application scenarios, among which decentralized storage is a very fresh application scenario.
However, during Filecoin’s development, the team found that due to certain design mechanisms and historical reasons, Filecoin had deviated from the project’s original intention: for example, the overly harsh penalty mechanism actually weakens security rather than protecting it, and the computing-power race has produced a computing-power monopoly by large miners, who monopolize packaging rights and can inflate their computing power by uploading useless data themselves.
These problems will cause the data environment on Filecoin to get worse and worse, leading to a lack of real value in the on-chain data, high data redundancy, and difficulty in commercializing the project.
In response to the above problems, the project team proposes to introduce multiple roles and a decentralized collaboration platform (a DAO) to ensure the high value of on-chain data through a reasonable economic model and incentive mechanism, and to store high-value data, namely the knowledge graph, on the blockchain through decentralized storage. In this way, the lack of on-chain data value and the computing-power monopoly of large miners can largely be solved.
Finally, what differences exist between the forked project and Filecoin itself?
On the basis of the above issues, EpiK’s design is very different from Filecoin’s. First of all, EpiK is more focused in terms of business model: it targets a different market and track from the cloud storage market where Filecoin sits, because decentralized storage has no advantage over professional centralized cloud storage in terms of storage cost and user experience.
EpiK focuses on building a decentralized knowledge graph, which reduces data redundancy and safeguards the value of data in the distributed storage chain while preventing the knowledge graph from being tampered with by a few people, thus making the commercialization of the entire project reasonable and feasible.
From the perspective of ecosystem construction, EpiK treats miners in a more friendly way and solves Filecoin’s pain points to a large extent. First, it replaces Filecoin’s storage collateral and commitment collateral with a one-time collateral.
Miners participating in the EpiK Protocol only need to pledge 1,000 EPK per miner, and only once before mining, not for each sector.
What does 1,000 EPK mean in practice? You only need to participate in pre-mining for about 50 days to earn the tokens used for the pledge. The EPK pre-mining campaign is currently underway and runs from early September to December, with a daily release of 50,000 ERC-20 standard EPK. Pre-mining nodes whose applications are approved divide these tokens according to each day’s mining ratio, and the tokens can be exchanged 1:1 for mainnet tokens after the mainnet launches. This will continue to expand the number of miners eligible to participate in EPK mining.
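To put those figures together, here is a rough back-of-the-envelope sketch; only the 1,000 EPK pledge and the 50,000 EPK daily release come from the numbers above, while the per-node share used in the example is hypothetical.

```python
# Rough sketch of the pledge arithmetic quoted above. Only the 1,000 EPK pledge
# and 50,000 EPK/day release are stated figures; the node share is hypothetical.

DAILY_RELEASE_EPK = 50_000     # total ERC-20 EPK released to pre-mining nodes per day
PLEDGE_EPK = 1_000             # one-time pledge required per miner

def days_to_earn_pledge(node_share: float) -> float:
    """Days needed to accumulate the pledge, given a node's share of the daily release."""
    daily_income = DAILY_RELEASE_EPK * node_share
    return PLEDGE_EPK / daily_income

# Example: a node earning 0.04% of the daily release (a hypothetical figure)
# would take about 50 days, matching the "about 50 days" estimate above.
print(round(days_to_earn_pledge(0.0004)))   # -> 50
```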
Secondly, EpiK has a more lenient penalty mechanism, different from Filecoin’s official consensus, storage and contract penalties. Because data on EpiK can only be uploaded by field experts, this is the “Expert to Person” (E2P) mode, and every piece of data is backed up by multiple miners, so if one or more miners go offline the network is not much affected. A miner who fails to submit the proof of spacetime in time because of being offline only loses the effective computing power of that sector; the pledged coins are not forfeited.
If the miner can re-submit the proof of spacetime within 28 days, he regains that computing power.
Unlike Filecoin’s 32 GB sectors, EpiK’s sealed sectors are much smaller, only 8 MB each, which largely solves Filecoin’s sector-space wastage problem, and every miner has the opportunity to seal sectors quickly, which is very friendly to miners with small computing power.
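For a sense of the granularity difference, here is a quick calculation with an arbitrary 1 TB of capacity (the 1 TB figure is just an example; only the 8 MB and 32 GB sector sizes come from the text above).

```python
# Worked comparison of sector granularity: how many sectors it takes to fill
# 1 TB (an arbitrary example size) with 8 MB sectors vs 32 GB sectors.

TB_IN_MB = 1024 * 1024           # MB per TB (binary units assumed)

epik_sector_mb = 8
filecoin_sector_mb = 32 * 1024   # 32 GB expressed in MB

print(TB_IN_MB // epik_sector_mb)      # 131072 sectors of 8 MB per TB
print(TB_IN_MB // filecoin_sector_mb)  # 32 sectors of 32 GB per TB
```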
Data volume and quality constraints will also ensure that the effective computing power gap between large and small miners does not widen excessively.
Finally, unlike Filecoin’s P2P data-uploading model, EpiK changes data uploading and maintenance to E2P uploading: field experts upload the data and ensure the quality and value of what goes on chain, while a rational economic model introduces a game relationship between the data-storage roles and the data-generation roles, ensuring the stability of the whole system and a continuous, high-quality output of on-chain data.
Deep Chain Finance:
Eric, on the eve of Filecoin’s mainnet launch, issues such as Filecoin’s pre-collateral aroused a lot of controversy among miners. In your opinion, what kind of impact will Filecoin bring to itself and to the whole distributed storage ecosystem after it launches? Do you think the current chaotic FIL prices are reasonable, and what should be the normal price of FIL?
Eric:
Filecoin’s mainnet has launched and many potential problems have been exposed, such as the aforementioned high pre-collateral requirement, the storage-resource waste and computing-power monopoly caused by unreasonable sector encapsulation, and the harsh penalty mechanism. These problems are quite serious and will greatly affect the development of the Filecoin ecosystem.
Here are two examples. Take the computing-power monopoly of big miners: once big miners have monopolized computing power, a very delicate situation emerges. When a miner stores a piece of file data for an ordinary user, there is no way to verify on chain whether what he stored was uploaded by someone else or by himself. A miner can fake another identity and upload data for himself, which means that when choosing which data to store, the miner has only one goal: inflating his computing power, and doing so as fast as possible.
In terms of computing power, there is no difference between storing other people’s data and storing my own. When I store someone else’s data, I don’t control that data, and the other party may be somewhere in the world where the bandwidth between us is not good enough.
The best option is therefore to store my own local data, which makes economic sense, and the result is that no one stores anyone else’s data on the chain at all. Everyone stores only their own data, because that is the cheapest choice, and the network ends up with essentially no storage utility: no one is providing storage for the mass of retail users.
The harsh penalty mechanism will also severely erode miners’ profits, because DDoS attacks are a very common technique for attackers; for a big miner, attacking other miners can yield a very high profit in a short period of time, which makes it a profitable move for every big miner.
As things stand, the vast majority of miners are not well maintained, so they are poorly protected against even low-level DDoS attacks. The penalty regime is therefore grim for them.
The contradiction between an unreasonable system and real demand will inevitably push the system to evolve in a more reasonable direction, so there will be many forked projects with more reasonable mechanisms, attracting Filecoin miners and diverting storage power.
Since all these projects are on the decentralized storage track, their requirements for miners are similar or even mutually compatible, so miners will gravitate toward the forked projects with better economic returns and business scenarios, filtering out the projects with real, practical value.
As for the chaotic FIL price: FIL is a project that has been in the making for several years and carries too many expectations, so one can only say that the current situation has its reasons for existing. There is no way to predict a reasonable price for FIL, because in the long run it depends on whether the project can be commercialized and on the real value of the on-chain data. In other words, we need to keep observing whether Filecoin becomes a computing-power game or a real carrier of value.
Deep Chain Finance:
Leo, we just mentioned that Filecoin’s pre-collateral issue caused dissatisfaction among miners, and after the mainnet launch the test coins from the second round of the Space Race were directly converted into real coins, while official selling of FIL hit the market, so many miners said they felt betrayed. What I want to know is: EpiK’s main slogan is “save the miners eliminated by Filecoin”. How does EpiK deal with Filecoin’s various problems, and how will it achieve this “rescue”?
Leo:
Filecoin’s tacit approval of computing-power inflation was effectively a declaration that the team had chosen to abandon small miners. And converting test coins into real coins hurt the interests of loyal big miners in one stroke. We do not know why such basic mistakes were made; we can only express regret.
EpiK was not created for the sake of forking Filecoin. Rather, to build a shared knowledge-graph ecosystem, EpiK had to integrate decentralized storage, so it chose Filecoin’s most hardcore components, the PoRep and PoSt decentralized verification technologies. To ensure the quality of knowledge-graph data, EpiK only allows field experts voted in by the community to upload data, so EpiK naturally prevents miners from inflating computing power, and there is no reason for valueless data to occupy such expensive decentralized storage resources.
With the inability to make up computing power, the difference between big miners and small miners is minimal when the amount of knowledge graph data is small.
We can’t say that we can save the big miners, but we are definitely the optimal choice for the small miners who are currently in the market to be eliminated by Filecoin.
Deep Chain Finance:
Let me ask Eric: According to EpiK protocol, EpiK adopts the E2P model, which allows only experts in the field who are voted to upload their data. This is very different from Filecoin’s P2P model, which allows individuals to upload data as they wish. In your opinion, what are the advantages of the E2P model? If only voted experts can upload data, does that mean that the EpiK protocol is not available to everyone?
Eric:
First, let me explain the advantages of the E2P model over the P2P model.
There are five roles in the DAO ecosystem: miners, coin holders, field experts, bounty hunters and gateways. These five roles share the EPK generated every day once the mainnet launches.
Miners receive 75% of the EPK, field experts receive 9%, and voting users share 1%.
The remaining 15% of the EPK fluctuates based on the network’s daily traffic, and this 15% is partly a game between miners and field experts.
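Put as a simple calculation (the total daily issuance figure below is hypothetical; only the percentages come from the description above):

```python
# Sketch of the daily EPK split described above. The total daily issuance figure
# is hypothetical; only the percentages come from the text.

def daily_split(total_epk: float) -> dict:
    return {
        "miners": total_epk * 0.75,          # 75% to miners
        "field_experts": total_epk * 0.09,   # 9% to field experts
        "voting_users": total_epk * 0.01,    # 1% shared by voting users
        "floating_pool": total_epk * 0.15,   # 15% fluctuates with network traffic
    }

print(daily_split(100_000))  # hypothetical day with 100,000 EPK issued
```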
Let me first describe the relationship between these two roles.
The first group of field experts is selected by the Foundation and covers different areas of knowledge (knowledge in a broad sense, including not only serious subjects but also home, food, travel, etc.). This group of field experts can recommend the next group, and a recommended expert only needs to receive 100,000 EPK in votes to become a field expert.
The field expert’s role is to submit high-quality data to the miner, who is responsible for encapsulating this data into blocks.
Network activity is judged by the amount of EPK the whole network has pledged for daily traffic (1 EPK = 10 MB/day); a higher pledged share indicates higher data demand, which requires miners to improve bandwidth quality.
If data demand decreases, field experts instead need to provide higher-quality data.
It is similar to a library: when there are more visitors, more seats are needed, i.e. the miners are paid to upgrade bandwidth; when there are fewer visitors, more money is spent on better books to attract visitors, i.e. the money goes to bounty hunters and field experts to produce more high-quality knowledge-graph data. The game between miners and field experts is the most important game in the ecosystem, unlike the game between the officials and big miners in the Filecoin ecosystem.
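A minimal sketch of how such a traffic gauge could be computed, assuming only the 1 EPK = 10 MB/day rule above; the threshold and the input figures are hypothetical.

```python
# Sketch of the traffic gauge described above: pledged EPK buys daily traffic at
# 1 EPK = 10 MB/day, and the pledged share of total capacity signals demand.
# The threshold and input values are hypothetical.

MB_PER_EPK_PER_DAY = 10

def demand_ratio(pledged_epk: float, network_capacity_mb_per_day: float) -> float:
    """Fraction of the network's daily capacity that users have pledged to consume."""
    demanded_mb = pledged_epk * MB_PER_EPK_PER_DAY
    return demanded_mb / network_capacity_mb_per_day

ratio = demand_ratio(pledged_epk=2_000_000, network_capacity_mb_per_day=50_000_000)
if ratio > 0.5:   # hypothetical threshold
    print("high demand: floating rewards lean toward miners upgrading bandwidth")
else:
    print("low demand: floating rewards lean toward experts and bounty hunters adding data")
```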
This game relationship between data producers and data storers, combined with a more rational economic model, will inevitably make the E2P model generate and store on-chain data of much higher quality than the P2P model, and the bandwidth quality for data access will also be better, resulting in greater business value and better landing scenarios.
I will then answer the question of whether this means that the EpiK protocol will not be universally accessible to all.
The E2P model only constrains the quality of the data generated and stored; it does not restrict who can play a role in the ecosystem. On the contrary, with the introduction of the DAO model, the variety of roles in the EpiK ecosystem is not limited and includes roles for ordinary people (for example, bounty hunters who are competent at their tasks), giving everyone a more logical way to participate in the system.
For example, a miner with computing power can provide storage, a person with a certain domain knowledge can apply to become an expert (this includes history, technology, travel, comics, food, etc.), and a person willing to mark and correct data can become a bounty hunter.
The presence of various efficient support tools from the project owner will lower the barriers to entry for various roles, thus allowing different people to do their part in the system and together contribute to the ongoing generation of a high-quality decentralized knowledge graph.
Deep Chain Finance:
Leo, some time ago EpiK released a technical white paper and an economic white paper, explaining the EpiK concept from the perspectives of technology and the economic model respectively. What I would like to ask is: what are the shortcomings of current distributed storage projects, and how will the EpiK protocol improve on them?
Leo:
Distributed storage can easily be misunderstood as something like Alibaba’s distributed databases (OceanBase and the like), but in the blockchain field we should focus on decentralized storage first.
There is a big problem with the decentralized storage on the market now, which can be summed up by the saying “why don’t they eat meat porridge?” (a Chinese idiom akin to “let them eat cake”).
How should we understand this? By its technical principles, decentralized storage is not cheaper than centralized storage; claiming that it is only works if the centralized storage being compared against is terrible.
What incentive does the average user have to spend more money on decentralized storage to store data?
Is it safer?
Hardly: miners on decentralized storage can shut down at any time, which is by no means more secure than keeping a copy each with Alibaba and with Amazon.
More private?
Storing encrypted data on decentralized storage is no different from storing encrypted data on Amazon.
Faster?
The scattered bandwidth of decentralized storage simply doesn’t compare to the fiber in a centralized server room. This is the root problem of the business model: no one is using it, no one is buying it, so what good is the grand vision?
EpiK’s goal is to guide all community participants to jointly build and share domain knowledge-graph data, which is the best way for machines to understand human knowledge. The more knowledge-graph data there is, the more knowledge a robot has and the more intelligent it becomes, exponentially so. In other words, EpiK uses decentralized storage technology to capture exponentially growing data value with linearly growing hardware costs, and that is where the buying demand for EPK comes from.
Organized data is worth a lot more than organized hard drives, and there is a demand for EPK when robots have the need for intelligence.
Deep Chain Finance:
Let me ask Leo, how many forked projects does Filecoin have so far, roughly? Do you think there will be more or less waves of fork after the mainnet launches? Have the requirements of the miners at large changed when it comes to participation?
Leo:
We don’t have specific statistics. Now that the mainnet has launched, we expect forked projects to increase; there are so many idle miners in the market that they need to be organized efficiently.
However, most of the forked projects we currently see simply modify the parameters of Filecoin’s economic model, which is not desirable. Changes at that level cannot alter the status quo of miners inflating computing power; they only make some of the big miners feel more comfortable mining, which will not help the decentralized storage ecosystem reach real adoption.
We need more reasonable landing scenarios so that idle mining resources can be turned into effective productivity, rather than pitching yet another “100x coin” and riding one wave of FOMO sentiment after another.
Deep Chain Finance:
How far along is the EpiK Protocol project, Eric? What other big moves are coming in the near future?
Eric:
The development of the EpiK Protocol is divided into 5 major phases.
Phase I: the test network “Obelisk”.
Phase II: Mainnet 1.0 “Rosetta”.
Phase III: Mainnet 2.0 “Hammurabi”.
Phase IV: enriching the knowledge graph toolkit.
Phase V: enriching the knowledge graph application ecosystem.
We are currently in the first phase, the test network “Obelisk”. Anyone can sign up to participate in the testnet pre-mining to obtain ERC-20 EPK tokens, which can be exchanged one-to-one after the mainnet launch.
We have recently launched ERC20 EPK on Uniswap, you can buy and sell it freely on Uniswap or download our EpiK mobile wallet.
In addition, we will soon launch the EpiK Bounty platform, and we welcome all community members to complete tasks together and build the EpiK community. At the same time, we are also pushing forward listings on centralized exchanges.
Users’ Questions
User 1:
Some KOLs say Filecoin has already consumed the value of the next few years, so it will plunge. What do you think?
Eric:
First of all, any judgment about the market has to correspond to a cycle. Before deciding that you are not optimistic about FIL, you first have to decide whether it is the project’s economic model you are not optimistic about, or the distributed storage track itself.
We are very confident in the distributed storage track. It will certainly go through cycles of growth and decline, and that is how better projects get selected.
Since the existing group of miners and the computing power already deployed are fixed, and since EpiK miners and FIL miners are compatible, miners will at any time gravitate toward the more promising and more economically viable projects.
As for the claim that Filecoin has consumed the value of the next few years and will therefore plunge: a plunge is not something we would predict. In this industry you have to keep learning, iterating and making value judgments. Market sentiment up or down is only one aspect; there are other, more important factors, such as the big washout in March this year. So one can only say such events will slow the development of the FIL community, but prices are indeed unpredictable.
User2:
Actually, in the end, if there are no applications and no one really uploads data, the market value will drop. So what are EpiK’s landing applications?
Leo: The best and most direct application of EpiK’s knowledge graph is the question and answer system, which can be an intelligent legal advisor, an intelligent medical advisor, an intelligent chef, an intelligent tour guide, an intelligent game strategy, and so on.
submitted by EpiK-Protocol to u/EpiK-Protocol

Test results TkeyNet, release date, and the opening of an office in the UK.


https://preview.redd.it/o29r0qu664j51.png?width=700&format=png&auto=webp&s=b34a6841ca97bbd70c9a23fe5ed83813d5f513d1
Hello everyone! Today we will talk about the results of the development of the TkeyNet system and related products, as well as share the test results and release date.
To date, the protocol has been fully debugged. The debugging of the TkeyNet system is proceeding with decisive steps: 94.7% of the testing of all the functions that TkeyNet includes has been completed.
During the tests, the following properties and functions were tested under different scenarios:
  • Atomic exchanges;
  • Security;
  • Validation;
  • Interaction of the Protocol with SQLite and PostgreSQL databases;
  • Consensus;
  • Modules for financial institutions;
  • Data exchange in the network;
  • Synchronization;
  • The monitoring system of balance;
  • Transactions that include trades;
  • API;
  • TkeyIndex;
  • TkeyConnect.
Documentation for the TkeyNet system will be released on the website tkey.org, and reviews of the TkeyNet system will be published on the company’s blog.
The TkeyNet back end is developed in the C++ and C programming languages; to improve performance, some of the code is written in C.
C is among the world’s fastest high-level programming languages. It is sometimes called a high-level assembler, but unlike assembler, code written in it can be compiled without changes for any device.
The specified stack is selected to meet the stringent requirements of the financial sector: enhanced security, scalability, and the ability of the system to work 24/7/365. The TkeyNet system is adapted to the world standards: ISO, ISIN.

https://preview.redd.it/x9g2rlz864j51.png?width=700&format=png&auto=webp&s=b2cf80ee5cd722abdb306803d6454e2afde899b0
TkeyNet supports the structured query language and interacts with flexible SQLite and PostgreSQL databases.
PostgreSQL complies with ANSI/ISO SQL standards and, unlike many other DBMSs, has object-oriented functionality, including full support for the ACID concept. SQLite was selected to improve the speed and performance of operations; SQLite will also serve as an excellent solution for mobile applications built on TkeyNet.
PostgreSQL is the most advanced RDBMS, focusing primarily on full compliance with standards and extensibility.
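As a minimal illustration of the kind of structured-query workload described here (balance monitoring over ANSI SQL), the sketch below uses Python’s built-in sqlite3 module; the table schema and sample data are hypothetical and are not TkeyNet’s actual data model.

```python
# Minimal illustration of an SQL workload of the kind described (balance monitoring),
# using Python's built-in sqlite3 module. The schema and data are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE balances (address TEXT PRIMARY KEY, amount REAL)")
conn.executemany(
    "INSERT INTO balances (address, amount) VALUES (?, ?)",
    [("hypothetical-address-1", 125.5), ("hypothetical-address-2", 40.0)],
)
conn.commit()

# The same query text would run unchanged against PostgreSQL (ANSI SQL).
total, = conn.execute("SELECT SUM(amount) FROM balances").fetchone()
print(total)  # 165.5
```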
During this period, we also completed work on a powerful API. The API is designed for organizations and developers to use the full power of TkeyNet in individual solutions. Software developers, whether mobile apps or local software, will get APIs and client libraries that will simplify the interaction of the corporate market with TkeyNet.
The API provides guaranteed availability, scalable volumes, and responds within milliseconds.

Test results: transaction and operation speed in TkeyNet

To date, testing shows excellent results on the speed of payments made via TkeyNet.
Last week’s results: unlike the first protocol, where it took at least 10 minutes to validate a transaction, payment transactions in TkeyNet are processed in 25 seconds without losing security properties. Performance improved by 2400% compared to Core 1.0.

The block generation time in Core 1.0 is at least 10 minutes, and in TkeyNet — 25 seconds according to the test results.
Where confirming a block used to take at least 10 minutes, and sometimes two or more hours, in TkeyNet the developers achieved a stable 25 seconds without losing security properties.
To achieve our goal of launching the Protocol — we identify possible changes, theorize solutions, model proposals, and test our theories in practice. This process involves a lot of internal discussions and collaboration with external parties who provide feedback on the operation of a particular module, and the entire system as a whole.
Before the launch of TkeyNet, work will be carried out to optimize and improve performance. The team plans to increase performance by 6000% compared to Core 1.0.

The Core 1.0 block generation time is 10 minutes. TkeyNet test network — 25 seconds. TkeyNet main network-up to 10 seconds.
To do this, we will work on the individual modules and libraries for databases, transactions, and consensus. The goal for the developers is to process payment transactions in at most 10 seconds.
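The percentage claims follow directly from the block-time ratios, as the quick calculation below shows (figures taken from the text above).

```python
# The percentages quoted above follow directly from the block-time ratios.
core_1_0_block_time = 600   # seconds (10 minutes)
testnet_block_time = 25     # seconds
mainnet_target_time = 10    # seconds (stated goal)

print(core_1_0_block_time / testnet_block_time * 100)   # 2400.0 -> "2400%" vs Core 1.0
print(core_1_0_block_time / mainnet_target_time * 100)  # 6000.0 -> "6000%" target
```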
The process is secured by the consensus logic, which solves a complex problem in a matter of seconds.

High throughput rates of the TkeyNet Protocol. B2B & B2C sectors.

To meet the needs of companies, startups, and corporations, — TkeyNet responds with fast processing of trades*.
Payment transactions — exchange of currency or shares, exchange operations, interbank settlements, etc.
For a better understanding, let’s analyze the obtained characteristics at the time of testing TkeyNet:

https://preview.redd.it/t31c3kih64j51.png?width=466&format=png&auto=webp&s=daad005d8cd5066e9d3cb67e9f147d0a7b362d97
Results: 1,225,000 trades per block.
https://preview.redd.it/gerxtesi64j51.png?width=472&format=png&auto=webp&s=e0b96995b7d6fbede61fe7761a55005ec20de403

The scaling of the TkeyNet system

The TkeyNet system is easily scalable by increasing the RAM and CPU parameters. We have launched three nodes on powerful servers with the following characteristics:
  • CPU: 72 Core.
  • RAM: 144 Gb.
  • SSD: 120 Gb.
Transaction throughput increased 2.8 times to ≈3,430,000 transactions per block.
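The throughput figures implied by these numbers, assuming the 25-second test-network block time stated earlier, work out as follows.

```python
# Throughput implied by the figures above (25-second test-network blocks assumed).
base_trades_per_block = 1_225_000
scaled_trades_per_block = round(base_trades_per_block * 2.8)   # 2.8x increase quoted above

print(scaled_trades_per_block)        # 3430000 trades per block, matching the figure above
print(scaled_trades_per_block / 25)   # 137200.0 trades per second at 25-second blocks
```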

https://preview.redd.it/mvot6c3l64j51.png?width=617&format=png&auto=webp&s=23f4ea57f30fd6978568d12948c0ecb571f2f071

Vertical scaling

This is unlike Bitcoin and other blockchain systems, where increasing the power of a miner’s hardware does not increase network bandwidth. In such networks, increasing power is usually a plain necessity, otherwise transactions will not go through at all or will take hours or even days; this is where “miner manipulation” and various types of network attacks come from.
TkeyNet uses vertical scaling. As node capacity increases, the throughput of the entire TkeyNet system increases. Regardless of the number and power of nodes, transactions complete in 25 seconds, and as node power grows, the number of processed transactions grows.
As we said above, our developers are working to improve these indicators so that block validation will take no more than 10 seconds; transactions will take from 1 second to at most 10 seconds. This will also significantly increase the volume of trades conducted via TkeyNet.

Front-end. Graphical user interface.


https://preview.redd.it/wrjfa95n64j51.png?width=700&format=png&auto=webp&s=a5137fac7774864fab431fdd7ac51c73ffca3075
During August, the web products team continued working on secure wallets and a blockchain explorer. The new software is still under development, but it has already come a long way from layouts and graphic design and is ready to move to the stage of connecting the TkeyNet back end with the native products. While everything is in the final stage on the back-end side, the client part (the front end*) needs time to complete debugging. We need to design the applications according to security best practices.
*The front-end developer creates the user interface. Testing of blockchain applications is not much different from testing regular applications and resembles functional testing of a payment system. Test cases include checking balances, fields, transaction statuses, and so on.
We also conduct continuous testing during development, taking into account the severity and scale of the TkeyNet system, which will be used by organizations, corporations, and individual users. Based on the testing results and a meeting of the development team, the launch of TkeyNet is scheduled for September 25 to mid-October, possibly earlier.
The website tkey.org will be updated in October 2020, reflecting the volume of documentation and packaging all the meaning of the new TkeyNet product. Tkeycoin.com will be updated earlier, before the TkeyNet launch.
Also, ahead of the upcoming launch of TkeyNet, management has decided to open a TKEY representative office in the UK for the development of digital banking and the digitization of assets.
Work with partners in the UK will focus on implementing TKEY’s corporate strategy and market solutions that meet customer needs, create new revenue sources, and provide opportunities for corporate growth.
We create a great future for people by continuously improving our services and business products. All our actions are concentrated on becoming a market leader.
We also remind you that after the launch of TkeyNet, the start date of TKEY trading will be announced.
The fourth quarter will be hot, get ready!
https://i.redd.it/ynv5xknq64j51.gif
submitted by tkeycoin to Tkeycoin_Official

Filecoin | Development Status and Mining Progress

Author: Gamals Ahmed, CoinEx Business Ambassador
https://preview.redd.it/5bqakdqgl3g51.jpg?width=865&format=pjpg&auto=webp&s=b709794863977eb6554e3919b9e00ca750e3e704
Filecoin is a decentralized storage network that turns cloud storage into an algorithmic market. Miners earn the protocol’s native token by providing data storage and/or retrieval; conversely, clients pay miners to store or distribute data and to retrieve it.
Filecoin announced that there will be more delays before its main network is officially launched.
Filecoin developers postponed the release date of the main network to between late July and late August 2020.
As mentioned in a recent announcement, the Filecoin team said that the initiative completed the first round of the internal protocol security audit. Platform developers claim that the results of the review showed that they need to make several changes to the protocol’s code base before performing the second stage of the software testing process.
Created by Protocol Labs, Filecoin was developed on top of the InterPlanetary File System (IPFS), a peer-to-peer data storage network. Filecoin will allow users to trade storage space in an open and decentralized market.
Filecoin developers ran one of the largest cryptocurrency sales of 2017, privately raising over $200 million from professional or accredited investors, including many institutional investors.
The main network was slated to launch last month, but in February 2020 the Filecoin development team pushed the release of the main network to between July 15 and July 17, 2020.
They claimed that the outbreak of the Coronavirus (COVID-19) in China was the main cause of the delay. The developers now say that they need more time to solve the problems found during a recent codebase audit.
The Filecoin team noted the following:
“We have drafted a number of protocol changes to ensure that building our major network launch is safe and economically sound.” The project developers will add them to two different implementations of Filecoin (Lotus and go-filecoin) in the coming weeks.
Filecoin developers conducted a survey to allow platform community members to cast their votes on three different launch dates for Testnet Phase 2 and mainnet.
The team reported that the community gave their votes. Based on the vote results, the Filecoin team announced a “conservative” estimate that the second phase of the network test should begin by May 11, 2020. The main Filecoin network may be launched sometime between July 20 and August 21, 2020.
The updates to the project can be found on the Filecoin Road Map.
Filecoin developers stated:
“This option will make us get the most important protocol changes first, and then implement the rest as protocol updates during testnet.” Filecoin has since pushed back its final testing phase.
The decentralized storage network provider Filecoin also announced its incentivized test network, the final stage of testing for the blockchain-backed storage network.
In a blog post on its website, Filecoin said it will postpone the last test round until August. The company also announced a calibration period from July 20 to August 3 to allow miners to test their mining setups and get an idea of how competitive conditions affect their rewards.
Filecoin had announced earlier last month that the incentivized testnet would precede its mainnet launch. The delay of the final test also means that the company has moved the mainnet launch window to between August 31 and September 21.
Despite the lack of clear incentives for miners and multiple delays, Filecoin has succeeded in attracting huge interest, especially in China. Investors remain highly speculative about the network’s mining hardware and its premium prices.
Mining in Filecoin
In most blockchain protocols, “miners” are network participants who do the work necessary to promote and maintain the blockchain. To provide these services, miners are compensated in the original cryptocurrency.
Mining in Filecoin works completely differently — instead of contributing to computational power, miners contribute storage capacity to use for dealing with customers looking to store data.
Filecoin will contain several types of miners:
Storage miners, responsible for storing files and data on the network; retrieval miners, responsible for providing fast pipelines for file retrieval; and repair miners, to be implemented later.
Storage miners are the heart of the network. They earn Filecoin by storing data for clients and computing cryptographic proofs to verify storage over time. The probability of earning the block reward and transaction fees is proportional to the amount of storage the miner contributes to the Filecoin network, not to hash power.
Retrieval miners are the veins of the network. They earn Filecoin by winning bids and fees for serving a particular file, determined by the market value of that file’s size. A retrieval miner’s bandwidth and initial response time for a retrieval deal determine its ability to close retrieval deals on the network.
The maximum bandwidth of a retrieval miner sets the total number of deals it can take on.
In the current implementation, the focus is mostly on storage miners, who sell storage capacity for FIL.

Hardware recommendations

The current system specifications recommended for running the miner are:
Compared to the hardware requirements for running a validating node, these standards are much higher, although they are certainly worth it. Since they will not increase in the foreseeable future, the money spent on Filecoin mining hardware will give users many years of reliable service and pay for itself many times over. Think of the investment as a small cloud-storage business. Launching a comparable service on the current data-hosting model would cost millions of dollars in infrastructure and logistics; with Filecoin, you can do the same for a few thousand dollars.
Proceed to mining
Deals are the primary function of the Filecoin network; a deal represents an agreement between a client and a miner for a “storage” contract.
Once the client decides on a miner to store with, based on the available capacity, duration and price required, he locks sufficient funds in an associated wallet to cover the total cost of the deal. The deal is then published once the miner accepts the storage agreement. By default, all Filecoin miners are set to automatically accept any deal that meets their criteria, although this can be disabled for miners who prefer to negotiate their deals manually.
After the deal is published, the client prepares the data for storage and transfers it to the miner. Upon receiving all the data, the miner packs the data into a sector, seals it, and begins submitting proofs to the chain. Once the first confirmation is obtained, the client can be sure the data is stored correctly, and the deal has officially started.
Throughout the deal, the miner submits continuous proofs to the chain, and the client pays gradually with the funds previously locked. If proofs are missing or late, the miner is penalized. More information about this can be found in the “Duration of operation, slashing and penalties” section of this page.
In Filecoin, miners earn two different types of rewards for their efforts: storage fees and block rewards.
Storage fees are the fees that clients pay regularly after a deal is reached, in exchange for storing data. These fees are automatically deposited into the withdrawal wallet associated with the miner as it continues to perform its duties over time, and they are locked for a short period upon receipt.
Block rewards are large sums awarded to the miner who mines a new block. Unlike storage fees, these rewards do not come from an associated client; instead, the network “prints” new FIL as an inflationary incentive for miners to develop the chain. All active miners on the network have a chance to receive a block reward, with their chance being directly proportional to the amount of storage they are currently contributing to the network.
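A minimal sketch of that proportional-chance rule follows; the miner names and power figures are hypothetical.

```python
# Sketch of the "chance proportional to storage power" rule described above.
# Miner names and power figures are hypothetical.
import random

miners = {"A": 100, "B": 50, "C": 10}   # effective storage power (arbitrary units)

def pick_block_winner(power: dict) -> str:
    """Pick a winner with probability proportional to each miner's power."""
    total = sum(power.values())
    r = random.uniform(0, total)
    running = 0.0
    for name, p in power.items():
        running += p
        if r <= running:
            return name
    return name  # fallback for floating-point edge cases

# Expected share of block rewards equals each miner's share of total power:
total = sum(miners.values())
print({m: p / total for m, p in miners.items()})  # A: 0.625, B: 0.3125, C: 0.0625
```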
Duration of operation, slashing and penalties
“Slashing” is a feature found in most blockchain protocols, and is used to punish miners who fail to provide reliable uptime or act maliciously against the network.
In Filecoin, miners are susceptible to two different types of slashing: storage fault slashing and consensus fault slashing.
Storage fault slashing is a term that covers a broader range of penalties, including fault fees, sector penalties, and termination fees. Miners must pay these penalties if they fail to keep their sectors reliable or decide to leave the network voluntarily.
A fault fee is a penalty that a miner incurs for each day a sector is offline. A sector penalty is incurred for a faulty sector that was not declared before the WindowPoSt check.
The sector pays a fault fee on top of the sector penalty once the fault is detected.
A termination fee is a penalty that a miner incurs when a sector is voluntarily or involuntarily terminated and removed from the network.
Consensus fault slashing is the penalty a miner incurs for committing consensus faults. This penalty applies to miners who act maliciously against the network’s consensus function.
Filecoin miners
Eight of the top 10 Filecoin miners are Chinese investors or companies, according to the blockchain explorer, while more companies are selling cloud mining contracts and hardware for the distributed file-sharing system. CoinDesk’s Wolfie Zhao wrote: “China’s craze for Filecoin may have been largely related to the long-standing popularity of crypto mining in the country overall, which is home to an estimated 65% of the computing power on Bitcoin.”
With Filecoin approaching its mainnet launch, after several delays since the $200 million raise in 2017, Chinese investors are once again speculating heavily about network mining devices and their premium prices.
Since Protocol Labs, the company behind Filecoin, released its “Testnet Incentives” program on June 9, scheduled to start a week later, more than a dozen Chinese companies have started selling cloud mining contracts and hardware, even though important details such as the mining incentive economics on the main network are still unsettled.
Sales volumes to date for each of these companies can range from half a million to tens of millions of dollars, according to self-reported data on these platforms that CoinDesk has reviewed and interviews with several mining hardware manufacturers.
Filecoin’s goal is to build a distributed storage network with token rewards to spur storage hosting as a way to drive wider adoption. Protocol Labs launched a test network in December 2019, but the tokens mined in the testing environment so far are not representative of the actual Filecoin tokens that can be traded once the main network is turned on. Moreover, the mining incentive economics on testnet do not represent how final block rewards will be distributed on the main network.
However, data from Filecoin’s testnet block explorers shows that eight of the 10 miners with the most effective mining power on testnet are currently Chinese miners.
These eight miners have about 15 petabytes (PB) of effective storage mining power, accounting for more than 85% of the testnet total of 17.9 PB. For context, 1 petabyte of hard disk storage = 1,000 terabytes (TB) = 1 million gigabytes (GB).
The Filecoin craze in China may be closely related to the long-standing popularity of crypto mining in the country overall, which is home to an estimated 65% of the computing power on Bitcoin. In addition, there has been a lot of hype in China about Filecoin mining since 2018, as companies promote all types of devices while the network is still in development.
“Crypto mining has always been popular in China,” said Andy Tien, co-founder of 1475, one of several Filecoin mining hardware manufacturers backed by prominent Chinese crypto funds such as Fenbushi and Hashkey Capital.
“Even though the Filecoin mining process is more technologically sophisticated, the idea of mining using hard drives instead of specialized machines like Bitcoin ASICs may be a lot easier for retail investors to understand,” he said.
Meanwhile, according to Feixiaohao, a Chinese service comparable to CoinMarketCap, nearly 50 Chinese crypto exchanges, most of them relatively unknown but including better-known venues such as Gate.io and Biki, have listed trading pairs for Filecoin futures contracts against USDT.
In Bitcoin mining, at the current difficulty level, a hash rate of one terahash per second (TH/s) is expected to generate around 0.000008 BTC within 24 hours. The more TH/s, the more bitcoin it should be able to produce, proportionately. But in Filecoin, a miner’s effective mining power depends on the amount of data sealed on the hard drive, not the total size of the hard drive.
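A simple proportional estimate based on the quoted figure looks like this; the hash rates plugged in are hypothetical.

```python
# Proportional estimate implied by the figure quoted above
# (~0.000008 BTC per day per TH/s at the stated difficulty).
BTC_PER_DAY_PER_THS = 0.000008

def expected_btc_per_day(hashrate_ths: float) -> float:
    return hashrate_ths * BTC_PER_DAY_PER_THS

print(expected_btc_per_day(1))     # ~0.000008 BTC/day for 1 TH/s
print(expected_btc_per_day(100))   # ~0.0008 BTC/day for a hypothetical 100 TH/s rig
```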
To seal data onto the hard drive, a Filecoin miner still needs processing power, i.e. CPU or GPU, as well as RAM. More powerful processors with better-optimized software can seal data more quickly, so those miners can accumulate effective mining power faster on a given day.
At this stage, there appears to be no transparent way at the network level for retail investors to see how much of the hard drive capacity they purchased actually translates into effective mining power.
U.S.-based Protocol Labs was behind Filecoin’s 2017 initial coin offering, which raised an astonishing $200 million.
This was in addition to a $50 million private raise backed by notable venture capital firms including Sequoia, Andreessen Horowitz and Union Square Ventures. CoinDesk’s parent company has also invested in Protocol Labs.
After rounds of delay, Protocol Labs said in September 2019 that a testnet launch would come around December 2019 and the main network would roll out in the first quarter of 2020.
The testnet started as promised, but the main network has been delayed again and is now expected to launch in August 2020.
What is the Filecoin mining process?
Filecoin mainly consists of three parts: the storage market (on chain), the Filecoin blockchain, and the retrieval market (off chain). Storage and retrieval are placed on chain and off chain, respectively, for security and efficiency. For users, storage happens relatively infrequently and has relatively high security requirements, so the storage process is placed on chain. Retrieval happens far more often than storage once a certain amount of data exists, and given the performance problems of processing data on chain, the retrieval process is performed off chain. To solve the payment security issue during retrieval, Filecoin adopts a micropayment strategy: in simple terms, the file is split into several pieces, and every time the user receives a piece of the data, the corresponding fee is paid (a sketch of this flow follows below). The miner types corresponding to Filecoin’s two major markets are storage miners and retrieval miners; storage miners are primarily responsible for storing data and packaging blocks, while retrieval miners are primarily responsible for serving data queries. After the main Filecoin network is running stably, repair miners will be introduced, responsible mainly for data maintenance.
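The sketch below is a conceptual illustration of that retrieval micropayment flow; all names, sizes and prices are hypothetical, and this is not the actual Filecoin retrieval protocol code.

```python
# Conceptual sketch of the retrieval micropayment flow described above: the file is
# split into pieces and the client pays a small amount as each piece arrives.
# All names, sizes and prices are hypothetical.

def retrieve_with_micropayments(file_bytes: bytes, piece_size: int, price_per_piece: float):
    pieces = [file_bytes[i:i + piece_size] for i in range(0, len(file_bytes), piece_size)]
    received = b""
    paid = 0.0
    for piece in pieces:
        received += piece          # retrieval miner serves one piece off-chain
        paid += price_per_piece    # client releases a micropayment for that piece
    return received, paid

data = b"x" * 1000                 # hypothetical 1,000-byte file
out, total_paid = retrieve_with_micropayments(data, piece_size=100, price_per_piece=0.001)
print(len(out), round(total_paid, 6))   # 1000 bytes delivered, ~0.01 paid over 10 pieces
```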
In the initial release of Filecoin, an order-matching mechanism was not implemented in the storage and retrieval markets; a take-order mechanism was adopted instead. The three main parts of Filecoin correspond to three processes: the storage process, the retrieval process, and the packaging-and-reward process. The following figure shows the simplified process and the miners’ income:
The Filecoin mining process is much more complicated, and the most important factor determining mining profit is effective storage. Effective storage is a key feature that distinguishes Filecoin from other decentralized storage projects. In Filecoin’s Expected Consensus (EC), effective storage plays a role similar to stake in PoS: it determines the likelihood that a miner wins the right to package a block, i.e. a miner’s share of the network’s total effective storage is proportional to its final mining revenue.
It is also possible to obtain higher effective storage under the same hardware conditions by improving the mining algorithm. However, how much gain can currently be achieved by improving the algorithm is still unknown.
Filecoin seeks to promote mining using Filecoin Discover
Filecoin announced Filecoin Discover, a program to encourage miners to join the Filecoin network. According to the company, Filecoin Discover is “an ever-growing catalog of numerous petabytes of public data covering literature, science, art, and history.” Miners interested in participating can choose which datasets they want to store and receive that data on a drive at a cost. In exchange for storing this verified data, miners earn additional Filecoin on top of the regular block rewards for storing data. The current catalog includes open-source datasets such as ENCODE, 1000 Genomes, Project Gutenberg, and Berkeley self-driving data, and more projects and datasets are added every day.
Ian Darrow, Head of Operations at Filecoin, commented on the announcement:
“Over 2.5 quintillion bytes of data are created every day. This data includes 294 billion emails, 500 million tweets and 64 billion messages on social media. But it is also climatology reports, disease tracking maps, connected vehicle coordinates and much more. It is extremely important that we maintain data that will serve as the backbone for future research and discovery”.
Miners who choose to participate in Filecoin Discover may receive hard drives pre-loaded with verified data, as well as setup and maintenance instructions, depending on the company. The Filecoin team will also host the Slack (fil-Discover-support) channel where miners can learn more.
Filecoin has had its fair share of obstacles along the way. Last month Filecoin announced a further delay before its main network is officially launched, after years of fundraising.
In late July QEBR (OTC: QEBR) announced that it had ceded ownership of two subsidiaries in order to focus all of the company’s resources on building blockchain-based mining operations.
The QEBR technology team previously announced that it has validated its system as a Filecoin node, with CPU, GPU, bandwidth and storage compatibility that meets all IPFS guidelines. The QEBR test system is connected to the main Filecoin blockchain and has already test-mined Filecoin tokens.
“The divestment of Sheen Boom and Jihye will allow our team to focus solely on the upcoming global launch of Filecoin. QEBR’s branch, Shenzhen DZD Digital Technology Ltd. (“DZD”), has a strong background in blockchain development, data extraction, data acquisition, data processing, and data technology research. We strongly believe Filecoin has the potential to be a leading blockchain-based cryptocurrency and will make every effort to make QEBR an important player when the Filecoin mainnet launches soon.”
IPFS and Filecoin
Filecoin and IPFS are complementary protocols for storing and sharing data in a decentralized network. While users are not required to use Filecoin and IPFS together, the two combined are working to resolve major failures in the current web infrastructure.
IPFS
IPFS is an open-source protocol that allows users to store and transmit verifiable data with each other. IPFS users persist data on the network by pinning it on their own device, using a third-party cloud service (known as a pinning service), or through community-oriented systems where a group of individual IPFS users share resources to keep content live.
The lack of a built-in incentive mechanism is the challenge Filecoin hopes to solve: it allows users to incentivize long-term distributed storage at competitive prices through the storage-contract market, while maintaining the efficiency and flexibility that the IPFS network provides.
Using IPFS
In IPFS, data is hosted by the nodes that pin it. For data to persist while the user’s node is offline, the user must either rely on other peers to pin the data voluntarily or use a centralized pinning service to store it.
Relying on peers to pin data can work well where one or more organizations share common files on an internal network, or where strong social contracts can be used to ensure continued hosting and preservation of content in the long run. In practice, however, most users in an IPFS network use a pinning service.
Using Filecoin
The final option is to store your data in a decentralized storage market, such as Filecoin. In Filecoin’s structure, clients make regular small payments to store data at a certain availability level, while miners earn those payments by continuously proving the integrity of that data, storing it, and ensuring its quick retrieval. This lets users pay Filecoin miners to ensure their content stays live when it is needed, a distinct advantage over relying only on other network users, as with IPFS alone.
Filecoin, powered by IPFS
It is important to know that Filecoin is built on top of IPFS. Filecoin aims to be a tightly integrated, seamless storage market that takes advantage of the basic functions IPFS provides; the two are connected to each other, but each can be used completely independently of the other. Users do not need to interact with Filecoin in order to use IPFS.
Some advantages of sharing Filecoin with IPFS:
Of all the decentralized storage projects, Filecoin undoubtedly attracts the most interest, and IPFS has been running stably for two years, fully demonstrating the strength of its core protocol.
Filecoin’s ability to win market share from traditional centralized storage depends on end-user experience and storage price. Currently, most Filecoin nodes are deployed in IDC (data-center) rooms. Actual deployment and operating costs are not lower than those of traditional centralized cloud storage, and the storage process is more complicated.
PoRep and PoSt, which require a large amount of proof computation, drive up the actual storage cost in the early days of Filecoin’s release. The actual cost of storing data may be higher than that of centralized cloud storage, but early storage nodes may lower their storage prices in order to win block rewards, which could make the actual storage price lower than traditional centralized cloud storage.
In the long term, Filecoin still needs to take full advantage of its P2P storage, shift storage devices from specialized to consumer-grade hardware, and improve its algorithms to reduce storage costs without affecting user experience. The storage problem is an important problem to be solved in the blockchain field, which is why a large number of storage projects were presented at the Web3 Summit. IPFS is an important part of the Web3 vision; its development will affect the development of Web3 to some extent, and likewise Web3’s development will partly determine the future of IPFS. Filecoin is an IPFS-based storage project initiated by the IPFS team, and there is no doubt that it is highly anticipated.
Resources :
  1. https://www.coindesk.com/filecoin-pushes-back-final-testing-phase-announces-calibration-period-for-miners
  2. https://docs.filecoin.io/mine/#types-of-miners
  3. https://www.nasdaq.com/articles/inside-the-craze-for-filecoin-crypto-mining-in-china-2020-07-12
  4. https://www.prnewswire.com/news-releases/qebr-streamlines-holdings-to-concentrate-on-filecoin-development-and-mining-301098731.html
  5. https://www.crowdfundinsider.com/2020/05/161200-filecoin-seeks-to-boost-mining-with-filecoin-discove
  6. https://zephyrnet.com/filecoin-seeks-to-boost-mining-with-filecoin-discove
  7. https://docs.filecoin.io/introduction/ipfs-and-filecoin/#filecoin-powered-by-ipfs
submitted by CoinEx_Institution to filecoin

DFINITY Research Report

Author: Gamals Ahmed, CoinEx Business Ambassador
ABSTRACT
The DFINITY blockchain computer provides a secure, performant and flexible consensus mechanism. At its core, DFINITY contains a decentralized randomness beacon, which acts as a verifiable random function (VRF) that produces a stream of outputs over time. The novel technique behind the beacon relies on the existence of a unique-deterministic, non-interactive, DKG-friendly threshold signatures scheme. The only known examples of such a scheme are pairing-based and derived from BLS.
The DFINITY blockchain is layered on top of the DFINITY beacon and uses the beacon as its source of randomness for leader selection and leader ranking. A “weight” is attributed to a chain based on the ranks of the leaders who propose the blocks in the chain, and that weight is used to select between competing chains. The DFINITY blockchain is further hardened by a notarization process which dramatically improves the time to finality and eliminates the nothing-at-stake and selfish mining attacks.
The DFINITY consensus algorithm is designed to scale through continuous quorum selections driven by the random beacon. In practice, DFINITY achieves block times of a few seconds and transaction finality after only two confirmations. The system gracefully handles temporary losses of network synchrony including network splits, while it is provably secure under synchrony.

1.INTRODUCTION

DFINITY is building a new kind of public decentralized cloud computing resource. The platform uses blockchain technology to provide unlimited capacity, performance and algorithmic governance shared by the world, with the capability to power autonomous self-updating software systems, enabling organizations to design and deploy custom-tailored cloud computing projects and thereby reduce enterprise IT system costs by 90%.
DFINITY aims to explore new territory and prove that the blockchain opportunity is far broader and deeper than anyone has hitherto realized, unlocking the opportunity with powerful new crypto.
Although a standalone project, DFINITY is not maximalist minded and is a great supporter of Ethereum.
The DFINITY blockchain computer provides a secure, performant and flexible consensus mechanism. At its core, DFINITY contains a decentralized randomness beacon, which acts as a verifiable random function (VRF) that produces a stream of outputs over time. The novel technique behind the beacon relies on the existence of a unique-deterministic, non-interactive, DKG-friendly threshold signatures scheme. The only known examples of such a scheme are pairing-based and derived from BLS.
DFINITY’s consensus mechanism has four layers: notary (provides fast finality guarantees to clients and external observers), blockchain (builds a blockchain from validated transactions via the Probabilistic Slot Protocol driven by the random beacon), random beacon (provides the source of randomness for all higher layers like smart contract applications), and identity (provides a registry of all clients).

Figure 1: DFINITY’s consensus mechanism layers
1. Identity layer:
Active participants in the DFINITY network are called clients. Clients are registered with permanent, pseudonymous identities. Moreover, DFINITY supports open membership by providing a protocol for registering new clients, who deposit a stake with an insurance period. This is the responsibility of the first layer.
2. Random Beacon layer:
Provides the source of randomness (VRF) for all higher layers including applications (smart contracts). The random beacon in the second layer is an unbiasable, verifiable random function (VRF) that is produced jointly by registered clients. Each random output of the VRF is unpredictable by anyone until just before it becomes available to everyone. This is a key technology of the DFINITY system, which relies on a threshold signature scheme with the properties of uniqueness and non-interactivity.

https://preview.redd.it/hkcf53ic05e51.jpg?width=441&format=pjpg&auto=webp&s=44d45c9602ee630705ce92902b8a8379201d8111
3. Blockchain layer:
The third layer deploys the “probabilistic slot protocol” (PSP). This protocol ranks the clients for each height of the chain, in an order that is derived deterministically from the unbiased output of the random beacon for that height. A weight is then assigned to block proposals based on the proposer’s rank such that blocks from clients at the top of the list receive a higher weight. Forks are resolved by giving favor to the “heaviest” chain in terms of accumulated block weight — quite similar to how traditional proof-of-work consensus is based on the highest accumulated amount of work.
The first advantage of the PSP protocol is that the ranking is available instantaneously, which allows for a predictable, constant block time. The second advantage is that there is always a single highest-ranked client, which allows for homogeneous network bandwidth utilization; a race between clients, by contrast, would favor bursty usage.
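A minimal sketch of that ranking-and-weighting idea is given below, assuming (purely for illustration) that the ranking is derived by seeding a deterministic shuffle with the beacon output and that block weight halves with each step down the ranking; DFINITY's actual weighting function is not specified here.

```python
import hashlib
import random

def ranking_for_height(beacon_output: bytes, clients: list[str]) -> list[str]:
    """Derive a deterministic client ranking for one height from the beacon output."""
    rng = random.Random(hashlib.sha256(beacon_output).digest())
    order = clients[:]
    rng.shuffle(order)      # every node computes the same order from the same beacon value
    return order

def block_weight(rank: int) -> float:
    """Hypothetical weighting: the top-ranked proposer's block counts the most."""
    return 0.5 ** rank

clients = ["alice", "bob", "carol", "dave"]
order = ranking_for_height(b"beacon output for height 7", clients)

# A chain's weight is the sum of the weights of its blocks, determined by who proposed them.
proposers_on_chain = [order[0], order[2]]
chain_weight = sum(block_weight(order.index(p)) for p in proposers_on_chain)
print(order, chain_weight)
```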
4. Notarization layer:
Provides fast finality guarantees to clients and external observers. DFINITY deploys the novel technique of block notarization in its fourth layer to speed up finality. A notarization is a threshold signature under a block created jointly by registered clients. Only notarized blocks can be included in a chain. RSA-based alternatives to BLS exist but suffer from the impracticality of setting up the threshold keys without a trusted dealer.
DFINITY achieves its high speed and short block times exactly because notarization is not full consensus.
DFINITY does not suffer from selfish mining attacks or the nothing-at-stake problem, because the notarization step makes it impossible for an adversary to build and maintain a chain of linked, notarized blocks in secret.
DFINITY’s consensus is designed to operate on a network of millions of clients. To enable scalability to this extent, the random beacon and notarization protocols are designed such that they can be safely and efficiently delegated to a committee.

1.1 OVERVIEW ABOUT DFINITY

DFINITY is a blockchain-based cloud-computing project that aims to develop an open, public network, referred to as the “internet computer,” to host the next generation of software and data. It is a decentralized, non-proprietary network for running the next generation of mega-applications, and the project has dubbed this public network “Cloud 3.0”.
DFINITY is a third generation virtual blockchain network that sets out to function as an “intelligent decentralised cloud,”¹ strongly focused on delivering a viable corporate cloud solution. The DFINITY project is overseen, supported and promoted by DFINITY Stiftung, a not-for-profit foundation based in Zug, Switzerland.
DFINITY is a decentralized network design whose protocols generate a reliable “virtual blockchain computer” running on top of a peer-to-peer network upon which software can be installed and can operate in the tamperproof mode of smart contracts.
DFINITY introduces algorithmic governance in the form of a “Blockchain Nervous System” that can protect users from attacks and help restart broken systems, dynamically optimize network security and efficiency, upgrade the protocol and mitigate misuse of the platform, for example by those wishing to run illegal or immoral systems.
DFINITY is an Ethereum-compatible smart contract platform that is implementing some revolutionary ideas to address blockchain performance, scaling, and governance. Although DFINITY could pose a credible existential threat to Ethereum, the project is pursuing a coevolutionary strategy by contributing funding and effort to Ethereum projects and freely offering its technology to Ethereum for adoption. DFINITY has labeled itself Ethereum’s “crazy sister” to express its close genetic resemblance to Ethereum, differentiated by its obsession with performance and its neuron-inspired governance model.
Dfinity raised $61 million from Andreessen Horowitz and Polychain Capital in a February 2018 funding round. At the time, Dfinity said it wanted to create an “internet computer” to cut the costs of running cloud-based business applications. A further $102 million funding round in August 2018 brought the project’s total funding to $195 million.
In May 2018, Dfinity announced plans to distribute around $35 million worth of Dfinity tokens in an airdrop. It was part of the company’s plan to create a “Cloud 3.0.” Because of regulatory concerns, none of the tokens went to US residents.
DFINITY would broaden and strengthen the EVM ecosystem by giving applications a choice of platforms with different characteristics. However, if DFINITY succeeds in delivering a fully EVM-compatible smart contract platform with higher transaction throughput, faster confirmation times, and governance mechanisms that can resolve public disputes without causing community splits, then it will represent a clearly superior choice for deploying new applications and, as its network effects grow, an attractive place to bring existing ones. Of course the challenge for DFINITY will be to deliver on these promises while meeting the security demands of a public chain with significant value at risk.

1.1.1 DFINITY FUTURE

  • DFINITY aims to explore new blockchain territory related to the original goals of the Ethereum project and is sometimes considered “Ethereum’s crazy sister.”
  • DFINITY is developing blockchain-based infrastructure to support a new style of the internet (akin to Ethereum’s “World Computer”), one in which the internet itself will support software applications and data rather than various cloud hosting providers.
  • The project suggests this reinvented software platform can simplify the development of new software systems, reduce the human capital needed to maintain and secure data, and preserve user data privacy.
  • Dfinity aims to reduce the costs of cloud services by creating a decentralized “internet computer” which may launch in 2020
  • Dfinity claims transactions on its network are finalized in 3–5 seconds, compared to 1 hour for Bitcoin and 10 minutes for Ethereum.

1.1.2 DFINITY’S VISION

DFINITY’s vision is that its new internet infrastructure can support a wide variety of end-user and enterprise applications. Social media, messaging, search, storage, and peer-to-peer Internet interactions are all examples of functionalities that DFINITY plans to host atop its public Web 3.0 cloud-like computing resource. In order to provide the transaction and data capacity necessary to support this ambitious vision, DFINITY features a unique consensus model (dubbed Threshold Relay) and algorithmic governance via its Blockchain Nervous System (BNS) — sometimes also referred to as the Network Nervous System or NNS.

1.2 DFINITY COMMUNITY

The DFINITY community brings people and organizations together to learn and collaborate on products that help steward the next generation of internet software and services. The Internet Computer allows developers to take on the monopolization of the internet, and return the internet to its free and open roots. We’re committed to connecting those who believe the same through our events, content, and discussions.

https://preview.redd.it/0zv64fzf05e51.png?width=637&format=png&auto=webp&s=e2b17365fae3c679a32431062d8e3c00a57673cf

1.3 DFINITY ROADMAP (TIMELINE)

February 15, 2017
Ethereum based community seed round raises 4M Swiss francs (CHF)
The DFINITY Stiftung, a not-for-profit foundation entity based in Zug, Switzerland, raised the round. The foundation held $10M of assets as of April 2017.
February 8, 2018
Dfinity announces a $61M fundraising round led by Polychain Capital and Andreessen Horowitz
The $61M round, led by Polychain Capital and Andreessen Horowitz, along with a DFINITY Ecosystem Venture Fund that will be used to support projects building on the DFINITY platform and an Ethereum-based raise in 2017, brings the project’s total funding to over $100 million. This is the first cryptocurrency token that Andreessen Horowitz has invested in, in a deal led by Chris Dixon.
August 2018
Dfinity raises a $102,000,000 venture round from Multicoin Capital, Village Global, Aspect Ventures, Andreessen Horowitz, Polychain Capital, Scalar Capital, Amino Capital and SV Angel.
January 23, 2020
Dfinity launches an open source platform aimed at the social networking giants

2.DFINITY TECHNOLOGY

Dfinity is building what it calls the internet computer, a decentralized technology spread across a network of independent data centers that allows software to run anywhere on the internet rather than in server farms that are increasingly controlled by large firms, such as Amazon Web Services or Google Cloud. This week Dfinity is releasing its software to third-party developers, who it hopes will start making the internet computer’s killer apps. It is planning a public release later this year.
At its core, the DFINITY consensus mechanism is a variation of the Proof of Stake (PoS) model, but offers an alternative to traditional Proof of Work (PoW) and delegated PoS (dPoS) networks. Threshold Relay intends to strike a balance between inefficiencies of decentralized PoW blockchains (generally characterized by slow block times) and the less robust game theory involved in vote delegation (as seen in dPoS blockchains). In DFINITY, a committee of “miners” is randomly selected to add a new block to the chain. An individual miner’s probability of being elected to the committee proposing and computing the next block (or blocks) is proportional to the number of dfinities the miner has staked on the network. Further, a “weight” is attributed to a DFINITY chain based on the ranks of the miners who propose blocks in the chain, and that weight is used to choose between competing chains (i.e. resolve chain forks).
A decentralized random beacon manages the random selection process of temporary block producers. This beacon is a Verifiable Random Function (VRF), which is a pseudo-random function that provides publicly verifiable proofs of its outputs’ correctness. A core component of the random beacon is the use of Boneh-Lynn-Shacham (BLS) signatures. By leveraging the BLS signature scheme, the DFINITY protocol ensures no actor in the network can determine the outcome of the next random assignment.
Dfinity is introducing a new standard, which it calls the internet computer protocol (ICP). These new rules let developers move software around the internet as well as data. All software needs computers to run on, but with ICP the computers could be anywhere. Instead of running on a dedicated server in Google Cloud, for example, the software would have no fixed physical address, moving between servers owned by independent data centers around the world. “Conceptually, it’s kind of running everywhere,” says Dfinity engineering manager Stanley Jones.
DFINITY also features a native programming language, called ActorScript (name may be subject to change), and a virtual machine for smart contract creation and execution. The new smart contract language is intended to simplify the management of application state for programmers via an orthogonal persistence environment (which means active programs are not required to retrieve or save their state). All ActorScript contracts are eventually compiled down to WebAssembly instructions so the DFINITY virtual machine layer can execute the logic of applications running on the network. The advantage of using the WebAssembly standard is that all major browsers support it and a variety of programming languages can compile down to Wasm (not just ActorScript).
Dfinity is moving fast. Recently, Dfinity showed off a TikTok clone called CanCan. In January it demoed a LinkedIn-alike called LinkedUp. Neither app is being made public, but they make a convincing case that apps made for the internet computer can rival the real things.

2.1 DFINITY CORE APPLICATIONS

The DFINITY cloud has two core applications:
  1. Enabling the re-engineering of business: DFINITY ambitiously aims to facilitate the re-engineering of mass-market services (such as Web Search, Ridesharing Services, Messaging Services, Social Media, Supply Chain, etc) into open source businesses that leverage autonomous software and decentralised governance systems to operate and update themselves more efficiently.
  2. Enable the re-engineering of enterprise IT systems to reduce costs: DFINITY seeks to re-engineer enterprise IT systems to take advantage of the unique properties that blockchain computer networks provide.
At present, computation on blockchain-based computer networks is far more expensive than traditional, centralised solutions (Amazon Web Services, Microsoft Azure, Google Cloud Platform, etc). Despite this higher computational cost, DFINITY intends to lower net costs “by 90% or more” by reducing the human capital cost associated with sustaining and supporting these services.
Whilst conceptually similar to Ethereum, DFINITY employs original and new cryptography methods and protocols (crypto:3) at the network level, in concert with AI and network-fuelled systemic governance (Blockchain Nervous System — BNS) to facilitate Corporate adoption.
DFINITY recognises that different users value different properties and sees itself as more of a fully compatible extension of the Ethereum ecosystem rather than a competitor of the Ethereum network.
In the future, DFINITY hopes that much of their “new crypto might be used within the Ethereum network and are also working hard on shared technology components.”
As the DFINITY project develops, the DFINITY Stiftung foundation intends to steadily increase the BNS’ decision-making responsibilities over time, eventually dissolving its own involvement entirely once the BNS is sufficiently sophisticated.
DFINITY’s consensus mechanism is a heavily optimized proof of stake (PoS) model. It places a strong emphasis on transaction finality by implementing a Threshold Relay technique in conjunction with the BLS signature scheme and a notarization method, addressing many of the problems associated with PoS consensus.

2.2 THRESHOLD RELAY

As a public cloud computing resource, DFINITY targets business applications by substantially reducing cloud computing costs for IT systems. They aim to achieve this with a highly scalable and powerful network with potentially unlimited capacity. The DFINITY platform is chock-full of innovative designs and features like their Blockchain Nervous System (BNS) for algorithmic governance.
One of the primary components of the platform is its novel Threshold Relay Consensus model from which randomness is produced, driving the other systems that the network depends on to operate effectively. The consensus system was first designed for a permissioned participation model but can be paired with any method of Sybil resistance for an open participation model.
“Threshold Relay is the mechanism by which Dfinity randomly samples replicas into groups, sets the groups (committees) up for threshold operation, chooses the current committee, and relays from one committee to the next.”
Threshold Relay consists of four layers (As mentioned previously):
  1. Notary layer, which provides fast finality guarantees to clients and external observers and eliminates nothing-at-stake and selfish mining attacks, providing Sybil attack resistance.
  2. Blockchain layer that builds a blockchain from validated transactions via the Probabilistic Slot Protocol driven by the random beacon.
  3. Random beacon, which as previously covered, provides the source of randomness for all higher layers, such as the blockchain layer and smart contract applications.
  4. Identity layer that provides a registry of all clients.

2.2.1 HOW DOES THRESHOLD RELAY WORK?

Threshold Relay produces an endogenous random beacon, and each new value defines random group(s) of clients that may independently try and form into a “threshold group”. The composition of each group is entirely random such that they can intersect and clients can be represented in multiple groups. In DFINITY, each group is comprised of 400 members. When a group is defined, the members attempt to set up a BLS threshold signature system using a distributed key generation protocol. If they are successful within some fixed number of blocks, they then register the public key (“identity”) created for their group on the global blockchain using a special transaction, such that it will become part of the set of active groups in a following “epoch”. The network begins at “genesis” with some number of predefined groups, one of which is nominated to create a signature on some default value. Such signatures are random values — if they were not then the group’s signatures on messages would be predictable and the threshold signature system insecure — and each random value produced thus is used to select a random successor group. This next group then signs the previous random value to produce a new random value and select another group, relaying between groups ad infinitum and producing a sequence of random values.
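The toy sketch below illustrates only the relaying pattern described above: each group's deterministic signature over the previous output becomes the new random value and selects the successor group. A keyed hash stands in for a unique, deterministic BLS threshold signature, and the group keys, group count and genesis value are invented for the example.

```python
import hashlib

def group_sign(group_key: bytes, message: bytes) -> bytes:
    """Stand-in for a unique, deterministic BLS threshold signature by one group."""
    return hashlib.sha256(group_key + message).digest()

NUM_GROUPS = 8                              # toy-sized; DFINITY groups have 400 members
group_keys = [f"group-{i}".encode() for i in range(NUM_GROUPS)]

random_value = b"genesis default value"     # the nominated genesis group signs this
current_group = 0
for round_no in range(5):
    # The current group signs the previous random value; the signature *is* the new
    # random value, and it also deterministically selects the successor group.
    random_value = group_sign(group_keys[current_group], random_value)
    current_group = int.from_bytes(random_value, "big") % NUM_GROUPS
    print(round_no, random_value.hex()[:16], "-> next group:", current_group)
```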
In a cryptographic threshold signature system a group can produce a signature on a message upon the cooperation of some minimum threshold of its members, which is set to 51% in the DFINITY network. To produce the threshold signature, group members sign the message individually (here the preceding group’s threshold signature), creating individual “signature shares” that are then broadcast to other group members. The group threshold signature can be constructed upon combination of a sufficient threshold of signature shares. So, for example, if the group size is 400 and the threshold is set at 201, any client that collects that many shares will be able to construct the group’s signature on the message. Other group members can validate each signature share, and any client using the group’s public key can validate the single group threshold signature produced by combining them. The magic of the BLS scheme is that it is “unique and deterministic”, meaning that from whatever subset of group members the required number of signature shares are collected, the single threshold signature created is always the same and only a single correct value is possible.
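The uniqueness property can be illustrated with Shamir secret sharing over a prime field, used here as a simplified stand-in for BLS threshold signing: any threshold-sized subset of shares reconstructs exactly the same value. The field and the toy secret are arbitrary choices for readability; only the 201-of-400 parameters come from the text above.

```python
import random

P = 2**61 - 1   # a toy prime field; real BLS uses pairing-friendly curves

def make_shares(secret: int, t: int, n: int) -> list[tuple[int, int]]:
    """Split `secret` into n Shamir shares, any t of which reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def f(x: int) -> int:
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares: list[tuple[int, int]]) -> int:
    """Lagrange interpolation at x = 0 over the prime field."""
    secret = 0
    for xi, yi in shares:
        num = den = 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = make_shares(secret=123_456_789, t=201, n=400)
a = reconstruct(shares[:201])      # the first 201 shares
b = reconstruct(shares[199:])      # a different 201-share subset
print(a == b == 123_456_789)       # True: any threshold subset yields the same value
```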
Consequently, the sequence of random values produced is entirely deterministic and unmanipulable, and the signatures generated by relaying between groups produce a Verifiable Random Function, or VRF. Although the sequence of random values is pre-determined given some set of participating groups, each new random value can only be produced upon the minimal agreement of a threshold of the current group. Conversely, in order for relaying to stall because a random number was not produced, the number of correct processes must be below the threshold. Thresholds are configured so that this is extremely unlikely. For example, if the group size is set to 400 and the threshold is 201, 200 or more of the processes must become faulty to prevent production. If there are 10,000 processes in the network, of which 3,000 are faulty, the probability this will occur is less than 10^-17.
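The order of magnitude of that stall probability can be checked directly: if the 400-member group is assumed to be drawn uniformly without replacement from 10,000 processes of which 3,000 are faulty, the number of faulty members in the group is hypergeometric, and the tail can be summed exactly.

```python
from fractions import Fraction
from math import comb

total, faulty, group, threshold = 10_000, 3_000, 400, 201
min_faulty_to_stall = group - threshold + 1     # 200 faulty members block the threshold

numerator = sum(
    comb(faulty, k) * comb(total - faulty, group - k)
    for k in range(min_faulty_to_stall, group + 1)
)
p_stall = Fraction(numerator, comb(total, group))
print(f"P(relay stalls) = {float(p_stall):.2e}")   # on the order of 10^-17
```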

2.3 DFINITY TOKEN

The DFINITY blockchain also supports a native token, called dfinities (DFN), which perform multiple roles within the network, including:
  1. Fuel for deploying and running smart contracts.
  2. Security deposits (i.e. staking) that enable participation in the BNS governance system.
  3. Security deposits that allow client software or private DFINITY cloud networks to connect to the public network.
Although dfinities will end up being assigned a value by the market, the DFINITY team does not intend for DFN to act as a currency. Instead, the project has envisioned PHI, a “next-generation” crypto-fiat scheme, to act as a stable medium of exchange within the DFINITY ecosystem.
Neuron operators can earn Dfinities by participating in network-wide votes, which could be concerning protocol upgrades, a new economic policy, etc. DFN rewards for participating in the governance system are proportional to the number of tokens staked inside a neuron.

2.4 SCALABILITY

DFINITY is constantly developing with a structure that separates consensus, validation, and storage into separate layers. The storage layer is divided into multiple shards, each of which is responsible for processing the transactions that occur in that shard’s state. The validation layer is responsible for combining hashes of all shards in a Merkle-like structure, resulting in a global state root that is stored in blocks in the top-level chain.

2.5 DFINITY CONSENSUS ALGORITHM

The single most important aspect of the user experience is certainly the time required before a transaction becomes final. This is not solved by a short block time alone — Dfinity’s team also had to reduce the number of confirmations required to a small constant. DFINITY moreover had to provide a provably secure proof-of-stake algorithm that scales to millions of active participants without compromising on decentralization.
Dfinity soon realized that the key to scalability lay in having an unmanipulable source of randomness available. Hence they built a scalable decentralized random beacon, based on what they call the Threshold Relay technique, right into the foundation of the protocol. This strong foundation drives a scalable and fast consensus layer: On top of the beacon runs a blockchain which utilizes notarization by threshold groups to achieve near-instant finality. Details can be found in the overview paper that we are releasing today.
The roots of the DFINITY consensus mechanism date back to 2014 when their Chief Scientist, Dominic Williams, started to look for more efficient ways to drive large consensus networks. Since then, much research has gone into the protocol and it took several iterations to reach its current design.
For any practical consensus system the difficulty lies in navigating the tight terrain that one is given between the boundaries imposed by theoretical impossibility-results and practical performance limitations.
The first key milestone was the novel Threshold Relay technique for decentralized, deterministic randomness, which is made possible by certain unique characteristics of the BLS signature system. The next breakthrough was the notarization technique, which allows DFINITY consensus to solve the traditional problems that come with proof-of-stake systems. Getting the security proofs sound was the final step before publication.
DFINITY consensus has made the proper trade-offs between the practical side (realistic threat models and security assumptions) and the theoretical side (provable security). Out came a flexible, tunable algorithm, which we expect will establish itself as the best performing proof-of-stake algorithm. In particular, having the built-in random beacon will prove to be indispensable when building out sharding and scalable validation techniques.

2.6 LINKEDUP

The startup has rather cheekily called this “an open version of LinkedIn,” the Microsoft-owned social network for professionals. Unlike LinkedIn, LinkedUp, which runs on any browser, is not owned or controlled by a corporate entity.
LinkedUp is built on Dfinity’s so-called Internet Computer, its name for the platform it is building to distribute the next generation of software and open internet services.
The software is hosted directly on the internet in a Switzerland-based independent data center, but under the Internet Computer concept it could equally be hosted at your house or mine. The compute power to run the application — LinkedUp, in this case — comes not from Amazon AWS, Google Cloud or Microsoft Azure, but from the distributed architecture that Dfinity is building.
Specifically, Dfinity notes that when enterprises and developers run their web apps and enterprise systems on the Internet Computer, the content is decentralized across a minimum of four or a maximum of an unlimited number of nodes in Dfinity’s global network of independent data centers.
Dfinity has open-sourced LinkedUp so that developers can create other types of open internet services on the architecture it has built.
“Open Social Network for Professional Profiles” suggests that, on Dfinity’s model, one could create an “Open WhatsApp”, “Open eBay”, “Open Salesforce” or “Open Facebook”.
The tools include a Canister Software Developer Kit and a simple programming language called Motoko that is optimized for Dfinity’s Internet Computer.
“The Internet Computer is conceived as an alternative to the $3.8 trillion legacy IT stack, and empowers the next generation of developers to build a new breed of tamper-proof enterprise software systems and open internet services. We are democratizing software development,” Williams said. “The Bronze release of the Internet Computer provides developers and enterprises a glimpse into the infinite possibilities of building on the Internet Computer — which also reflects the strength of the Dfinity team we have built so far.”
Dfinity says its “Internet Computer Protocol” allows for a new type of software called autonomous software, which can guarantee permanent APIs that cannot be revoked. When all these open internet services (e.g. open versions of WhatsApp, Facebook, eBay, Salesforce, etc.) are combined with other open software and services it creates “mutual network effects” where everyone benefits.
Since 1 November, DFINITY has released 13 new public versions of the SDK, leading up to its second major milestone [at WEF Davos] of demoing a decentralized web app called LinkedUp on the Internet Computer. Subsequent milestones towards the public launch of the Internet Computer will involve:
  1. Onboarding a global network of independent data centers.
  2. Fully tested economic system.
  3. Fully tested Network Nervous Systems for configuration and upgrades

2.7 WHAT IS MOTOKO?

Motoko is a new software language being developed by the DFINITY Foundation, with an accompanying SDK, that is designed to help the broadest possible audience of developers create reliable and maintainable websites, enterprise systems and internet services on the Internet Computer with ease. By developing the Motoko language, the DFINITY Foundation will ensure that a language that is highly optimized for the new environment is available. However, the Internet Computer can support any number of different software frameworks, and the DFINITY Foundation is also working on SDKs that support the Rust and C languages. Eventually, it is expected there will be many different SDKs that target the Internet Computer.
Full article
submitted by CoinEx_Institution to u/CoinEx_Institution [link] [comments]

Building Ergo: SPV security

There’s often a tension in the crypto world between security and convenience. That trade-off is unacceptable if we want these technologies to be widely used. Here’s how Ergo addresses one common and very important issue.
We all know that the most secure way to use Bitcoin, or any crypto, is to download a copy of the blockchain and run a full node yourself. That way, every time you or anyone else makes a transaction, your client checks the blockchain to ensure it’s valid. You don’t have to trust anyone else.
A full Bitcoin node checks all the blocks in the blockchain (using headers) and makes sure there are no fraudulent transactions. It’s a very secure way of using crypto – but there’s a problem. It requires significant bandwidth, storage and processing power. That kind of hardware is expensive, and using a full node to validate and make transactions is in any case unsuitable for mobile devices. This is particularly true for Bitcoin, where the blockchain is over 270 GB and counting.
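To make the header-checking idea concrete, here is a minimal sketch of header-chain validation: each header must commit to its predecessor and meet the proof-of-work target. The header layout is simplified and hypothetical rather than Bitcoin's exact format, and the genesis header is assumed to be known and trusted.

```python
import hashlib
from dataclasses import dataclass

@dataclass
class Header:
    prev_hash: bytes      # commitment to the previous header
    merkle_root: bytes    # commitment to the block's transactions
    nonce: int

def header_hash(h: Header) -> bytes:
    data = h.prev_hash + h.merkle_root + h.nonce.to_bytes(8, "big")
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def validate_header_chain(headers: list[Header], target: int) -> bool:
    """Check that every header links to its predecessor and meets the PoW target."""
    for prev, cur in zip(headers, headers[1:]):
        if cur.prev_hash != header_hash(prev):
            return False              # broken hash link
        if int.from_bytes(header_hash(cur), "big") > target:
            return False              # insufficient proof of work
    return True
```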
SPV
Simplified Payment Verification (SPV) is designed to address this problem, as described in the Bitcoin white paper:
Satoshi notes that this is not a perfect solution, and is vulnerable to an attacker overpowering the network and fooling SPV users.
Moreover, while SPV mode is intended for resource-limited devices, even this ‘lite’ approach is not always feasible. Ethereum’s headers alone total around 5 GB to download. Thus Ethereum mobile clients do not validate the chain and so blindly have to trust third parties.
There are proposals to reduce the requirements for SPV mode by checking just a few random headers, instead of all of them. But it’s hard to do that securely.
Efficient SPV
Several years have been spent researching and developing secure protocols that allow for efficient SPV clients. The two best-known and most reliable protocols are NiPoPoWs and FlyClient.
Ergo implements NiPoPoWs, or Non-interactive Proof-of-Proof-of-Work. This technology can be explored in full on this dedicated website: https://nipopows.com:
This enables us to build a mobile SPV client that requires around just 100KB of block headers to be downloaded.
A super-efficient Ergo wallet with SPV security is in development, so stay tuned for more updates!
submitted by eleanorcwhite to btc [link] [comments]

Ethereum 2.0: Why, How And Then?

Ethereum 2.0: Why, How And Then?
Why update Ethereum? One problem of the Ethereum network that the update should solve is scalability. At the moment, its blockchain can process up to 15 transactions per second, which is more than twice that of Bitcoin. However, this speed is still not enough for a large number of users. For example, the Visa payment system can perform up to 24 thousand transactions per second.
Adding an Optimistic Rollup technology will help to solve the scalability problem. According to Vitalik Buterin, the creator of Ethereum, its implementation will occur after the network’s update and will increase its throughput to 1000 transactions per second.
by StealthEX
Another solution to this problem is a change in the algorithm. Currently, Ethereum runs on the same protocol as Bitcoin, Proof-of-Work: confirmation of transactions in the cryptocurrency network relies on the computing power of processors.
Using the Proof-of-Work algorithm limits the growth of the Ethereum network bandwidth. To withstand a large load, more miners are needed, but the growth of their number slows down since it becomes more difficult to mine cryptocurrency and, consequently, less profitable.
This is the reason the Ethereum development team is planning to switch to the Proof-of-Stake algorithm. Unlike the PoW, it does not require the use of computing power to confirm blocks. Instead of miners, transactions will be confirmed by validators. To become a validator, the user should have 32 ETH and install a special client. From a technical point of view, this is easier than buying mining devices and maintaining their functionality, as well as looking for access to cheap electricity. Thus, the system will no longer need expensive hardware.
The main solution to the scalability problem will be to implement sharding. Current Ethereum network is a unified database. After the update, the blockchain will be divided into autonomous, interacting blocks — shards, each of which will process particular transactions and smart contracts, which, however, will be recognized by the entire Ethereum blockchain. Nodes that form the shard process information separately, this allows maintaining the principle of decentralization. This is important since the risk of centralization is another big problem of the old algorithm.
Since the complexity of mining has increased over time, and now this process requires having expensive equipment and access to cheap electricity, small participants can not afford to stay in the game. In such conditions, big pools of miners that can provide higher productivity have a decisive advantage. For example, in April, more than 50% of the computing power of the Ethereum network was provided by only two mining pools. This creates a significant risk of centralization and “51% attacks”.
Validators will confirm transactions and get rewards in the form of passive income. According to the project’s roadmap, this amount will vary from 1.81% to 18.1%. The profitability of staking will depend on the number of validators: the more of them, the smaller the amount they get. However, there will be some costs. In the same Ethereum 2.0 roadmap, developers mentioned that the cost of validating transactions, based on rough calculations, will be about $180 per year. One of the developers of the project, Justin Drake, predicts that on average the validator will receive an income of 5% per year.
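As a rough illustration of those figures, the back-of-the-envelope calculation below nets the ~$180 yearly cost against a few of the quoted gross reward rates for a single 32 ETH validator. The ETH price used is an assumption for the example only, not a figure from this article.

```python
eth_staked = 32
eth_price_usd = 240          # assumed price for illustration
yearly_cost_usd = 180        # rough validation cost quoted in the roadmap

for gross_rate in (0.0181, 0.05, 0.181):
    stake_usd = eth_staked * eth_price_usd
    net_usd = stake_usd * gross_rate - yearly_cost_usd
    print(f"gross {gross_rate:.2%}: net ${net_usd:,.0f} ({net_usd / stake_usd:.2%} net)")
```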

What is the estimated Ethereum 2.0 release date?

The launch of Ethereum 2.0 will take place gradually, in six stages, the “zero” of which is expected this summer. However, it is worth noting that due to finding vulnerabilities, the dates have already been shifted several times–initially, the transition to the new version was planned in 2019.
One of the developers of the project, Afri Schoedon, said that the launch could be postponed to 2021. According to him, under favourable circumstances, the main network can be presented in November of this year, but there are certain difficulties in this.
Schoedon explained that before launching ETH 2.0, all of its clients must be brought to the same specifications. After that, the developer’s team needs to open a unified deposit contract so that users can transfer their assets from the old chain to the new one. Between these stages, developers also need additional time, so they could test all aspects of the new system.
As it usually happens, there’s going to be two parallel blockchains as a result of the hard fork. The first one, ETH1, will continue to work using an old protocol, while the update will be implemented on ETH2. Users will be able to transfer their coins from the old blockchain to the new one, but not vice versa. The appearance of sharding will allow developers to move to phase 1.5 — during this phase, ETH1 will merge with ETH2, becoming one of the 64 “shards” of the updated blockchain. In the second phase, smart contracts become available on ETH2, which can be considered the full start of its economic activity.

And what are expectations?

Updating the Ethereum network will increase its technical capabilities, namely, it will speed up and reduce the cost of transactions, as well as make the blockchain less vulnerable to centralization.
Currently, the absolute majority of decentralized finance projects are developed using the Ethereum platform. The Ethereum 2.0 release will probably attract even more partners who will use the blockchain for their projects.
Ryan Watkins, a researcher at the analytics company Messari, rates the importance of the update highly.
“ETH 2.0 is a much stronger catalyst than the Bitcoin halving simply because it’s an uncertain and fundamental change.” — Ryan Watkins wrote on his Twitter account
And the part about uncertainty is hard to disagree with. Of course, there are some concerns about the bright Ethereum future. The coming hard fork carries with it potential negative consequences. For example, after switching to the PoS algorithm, the US Securities and Exchange Commission (SEC) may well admit Ethereum as a security, which will lead to legal complications similar to those faced by Pavel Durov when trying to launch his TON blockchain platform.
For now, ETH is the most popular coin for mining at home, and most of these miners will probably just leave the network.
There is also a risk that the price of Ethereum may fall. To receive passive income for storing ETH, the user will not only need to have 32 coins but also block them through a special transaction. They will not be able to withdraw these blocked funds immediately. As stated in the project roadmap, the cryptocurrency withdrawal process will take at least 18 hours. This could take even more time if many users request the return of tokens at the same time. Thus, if ETH falls in price, it will be impossible to sell it immediately, and there is a risk of losing some capital and all the income received from staking.
Nevertheless, investors are mostly optimistic — the volume of Ethereum options on the Deribit exchange has grown to a historical high, which indicates confidence in the future of Ethereum project. The ETH price is also growing, having overcome the consequences of the March collapse of cryptocurrencies.
Most experts agree that Ethereum price will grow after the update. On the one hand, the altcoin will become more expensive, as it will become a more attractive investment. On the other hand, the supply on the market will decrease, as users will start transferring coins from the first version of the network to the second, to block them for passive income.
If you want to participate in the future fate of the ETH project, you can buy Ethereum using our service. We provide fast, anonymous and limitless swaps between over 250 cryptocurrencies. Just go to StealthEX and follow these easy steps:
✔ Choose the pair and the amount for your exchange. For example BTC to ETH.
✔ Press the “Start exchange” button.
✔ Provide the recipient address to which the coins will be transferred.
✔ Move your cryptocurrency for the exchange.
✔ Receive your coins.
Follow us on Medium, Twitter, Facebook, and Reddit to get StealthEX.io updates and the latest news about the crypto world. For all requests message us via [[email protected]](mailto:[email protected]).
The views and opinions expressed here are solely those of the author. Every investment and trading move involves risk. You should conduct your own research when making a decision.
Original article was posted on https://stealthex.io/blog/2020/06/30/ethereum-2-0-why-how-and-then/.
submitted by Stealthex_io to StealthEX [link] [comments]

Let's discuss some of the issues with Nano

Let's talk about some of Nano's biggest issues. I also made a video about this topic, available here: https://youtu.be/d9yb9ifurbg.
00:12 Spam
Issues
Potential Mitigations & Outstanding Issues
01:58 Privacy
Issues
  • Nano has no privacy. It is pseudonymous (like Bitcoin), not anonymous.
Potential Mitigations & Outstanding Issues
  • Second layer solutions like mixers can help, but some argue that isn't enough privacy.
  • The current protocol design + the computational overhead of privacy does not allow Nano to implement first layer privacy without compromising its other features (fast, feeless, and scalable transactions).
02:56 Decentralization
Issues
  • Nano is currently not as decentralized as it could be. ~25% of the voting weight is held by Binance.
  • Users must choose representatives, and users don't always choose the best ones (or never choose).
Potential Mitigations & Outstanding Issues
  • Currently 4 unrelated parties (who all have a verifiable interest in keeping the network running) would have to work together to attack the network
  • Unlike Bitcoin, there is no mining or fees in Nano. This means that there is not a strong incentive for emergent centralization from profit maximization and economies of scale. We've seen this firsthand, as Nano's decentralization has increased over time.
  • Nano representative percentages are not that far off from Bitcoin mining pool percentages.
  • In Nano, voting weight can be remotely re-delegated to anyone at any time. This differs from Bitcoin, where consensus is controlled by miners and requires significant hardware investment.
  • The cost of a 51% attack scales with the market cap of Nano.
06:49 Marketing & adoption
Issues
  • The best technology doesn't always win. If no one knows about or uses Nano, it will die.
Potential Mitigations & Outstanding Issues
  • I would argue that the best technology typically does win, but it needs to be best in every way (price, speed, accessibility, etc). Nano is currently in a good place if you agree with that argument.
  • Bitcoin started small, and didn't spend money on marketing. It takes time to build a community.
  • The developers have said they will market more once the protocol is where they want it to be (v20 or v21?).
  • Community marketing initiatives have started to form organically (e.g. Twitter campaigns, YouTube ads, etc).
  • Marketing and adoption is a very difficult problem to solve, especially when you don't have first mover advantage or consistent cashflow.
08:07 Small developer fund
Issues
  • The developer fund only has 3 million NANO left (~$4MM), what happens after that?
Potential Mitigations & Outstanding Issues
  • The goal for Nano is to be an Internet RFC like TCP/IP or SMTP - development naturally slows down when the protocol is in a good place.
  • Nano development is completely open source, so anyone can participate. Multiple developers are now familiar with the Nano protocol.
  • Businesses and whales that benefit from Nano (exchanges, remittances, merchant services, etc) are incentivized to keep the protocol developed and running.
  • The developer fund was only ~5% of the supply - compare that to some of the other major cryptocurrencies.
10:08 Node incentives
Issues
  • There are no transaction fees, why would people run nodes to keep the network running?
Potential Mitigations & Outstanding Issues
  • The cost of consensus is so low in Nano that the benefits of the network itself are the incentive: decentralized money with 0 transaction fees that can be sent anywhere in the world nearly instantly. Similar to TCP/IP, email servers, and http servers. Just like Bitcoin full nodes.
  • Paying $50-$100 a month for a high-end node is a lot cheaper for merchants than paying 1-3% of total sales (see the quick comparison after this list).
  • Businesses and whales that benefit from Nano (exchanges, remittances, merchant services, etc) are incentivized to keep the protocol developed and running.
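As a rough comparison of those two costs (the node cost and fee rate below are taken from the ranges mentioned above; the break-even point is simple arithmetic):

```python
node_cost_per_month = 100     # upper end of the $50-$100/month figure above
fee_rate = 0.02               # a 2% payment fee, within the 1-3% range cited

break_even_sales = node_cost_per_month / fee_rate
print(f"Running a node beats a {fee_rate:.0%} fee once monthly sales exceed "
      f"${break_even_sales:,.0f}")          # $5,000
```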
11:58 No smart contracts
Issues
  • Nano doesn't support smart contracts.
Potential Mitigations & Outstanding Issues
  • Nano's sole goal is to be the most efficient peer-to-peer value transfer protocol possible. Adding smart contracts makes keeping Nano feeless, fast, and decentralized much more difficult.
  • Other solutions (e.g. Ethereum) exist for creating and enforcing smart contracts.
  • Code can still interact with Nano, but not on the first layer in a decentralized manner.
  • Real world smart contract adoption and usage is pretty limited at the moment, but that might not always be the case.
13:20 Price stability
Issues
  • Why would anyone accept or spend Nano if the price fluctuates so much?
  • Why wouldn't people just use a stablecoin version of Nano for sending and receiving money?
Potential Mitigations & Outstanding Issues
  • With good fiat gateways (stable, low fees, etc), you can always buy back the fiat equivalent of what you've spent.
  • The hope is that with enough adoption, people and businesses will eventually skip the fiat conversion and use Nano directly.
  • Because Nano is so fast, volatility is less of an issue. Transactions are confirmed in <10 seconds, and prices change less in that timeframe (vs 10 minutes to hours for Bitcoin).
  • Stablecoins reintroduce trust. Stable against what? Who controls the supply, and how do you get people to adopt them? What happens if the assets they're stable against fail? Nano is pure supply and demand.
  • With worldwide adoption, the market capitalization of Nano would be in the trillions. If that happens, even millions of dollars won't move the price significantly.
15:06 Deflation
Issues
  • Nano's current supply == max supply. Why would people spend Nano today if it could be worth more tomorrow?
  • What happens to principal representatives and voting weight as private keys are lost? How do you know keys are lost?
Potential Mitigations & Outstanding Issues
  • Nano is extremely divisible. 1 NANO is 10^30 raw. Since there are no transaction fees, smaller and smaller amounts of Nano could be used to transact, even if the market cap reaches trillions (see the sketch after this list).
  • People will always buy things they need (food, housing, etc).
  • I'm not sure what the plan is to adjust for lost keys. Probably requires more discussion.
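To make the divisibility point above concrete, here's a quick sketch; the $10,000 price is a made-up number purely for illustration.

```python
from decimal import Decimal

RAW_PER_NANO = 10**30

def nano_to_raw(amount_nano: Decimal) -> int:
    """Convert a NANO amount to raw using exact decimal arithmetic."""
    return int(amount_nano * RAW_PER_NANO)

# Even at a hypothetical price of $10,000 per NANO, a one-cent purchase is still
# a comfortably large integer number of raw:
price_usd_per_nano = Decimal("10000")
one_cent_in_nano = Decimal("0.01") / price_usd_per_nano     # 0.000001 NANO
print(nano_to_raw(one_cent_in_nano))                        # 10**24 raw
```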
Long-term Scalability
Issue
  • Current node software and hardware cannot handle thousands of TPS (low-end nodes fall behind at even 50 TPS).
  • The more representatives that exist, the more vote traffic is required (network bandwidth).
  • Low-end nodes currently slow down the network significantly. Principal representatives waste their resources constantly bootstrapping these weak nodes during network saturation.
Potential Mitigations & Outstanding Issues
  • Even as is, Nano can comfortably handle a 50 TPS average, which sustained over a day is roughly the number of transactions per day PayPal was handling in 2011 with nearly 100 million users (see the arithmetic after this list).
  • Network bandwidth increases 50% a year.
  • There are some discussions of prioritizing bootstrapping by vote weight to limit the impact of weak nodes.
  • Since Nano uses an account balance system, pruning could drastically reduce storage requirements. You only need current state to keep the network running, not the full transaction history.
  • In the future, vote stapling could drastically reduce bandwidth usage by collecting all representative signatures up front and then only sharing that single aggregate signature.
  • Nano has no artificial protocol-based limits (e.g. block sizes or block times). It scales with hardware.
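For reference, the arithmetic behind the PayPal comparison above (50 TPS sustained for a day):

```python
tps = 50
seconds_per_day = 60 * 60 * 24
print(f"{tps * seconds_per_day:,} transactions per day")   # 4,320,000
```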
Obviously there is still a lot of work to be done in some areas, but overall I think Nano is in a good place. For people that aren't Nano fans, what are your biggest concerns?
submitted by Qwahzi to CryptoCurrency [link] [comments]

Ethereum 2.0: Why, How And Then?

Ethereum 2.0: Why, How And Then?
Why update Ethereum? One problem of the Ethereum network that the update should solve is scalability. At the moment, its blockchain can perform to 15 transactions per second, which is over two times more than that of bitcoin. However, this speed is still not enough for a large number of users. For example, the Visa payment system can perform up to 24 thousand transactions per second.
Adding an Optimistic Rollup technology will help to solve the scalability problem. According to Vitalik Buterin, the creator of Ethereum, its implementation will occur after the network’s update and will increase its throughput to 1000 transactions per second.
by StealthEX
Another solution to this problem is a change in the algorithm. Currently, Ethereum runs on the same protocol as Bitcoin, Proof-of-Work, confirmation of transactions in the cryptocurrency network occurs using the computing power of processors.
Using the Proof-of-Work algorithm limits the growth of the Ethereum network bandwidth. To withstand a large load, more miners are needed, but the growth of their number slows down since it becomes more difficult to mine cryptocurrency and, consequently, less profitable.
This is the reason the Ethereum development team is planning to switch to the Proof-of-Stake algorithm. Unlike the PoW, it does not require the use of computing power to confirm blocks. Instead of miners, transactions will be confirmed by validators. To become a validator, the user should have 32 ETH and install a special client. From a technical point of view, this is easier than buying mining devices and maintaining their functionality, as well as looking for access to cheap electricity. Thus, the system will no longer need expensive hardware.
The main solution to the scalability problem will be to implement sharding. Current Ethereum network is a unified database. After the update, the blockchain will be divided into autonomous, interacting blocks — shards, each of which will process particular transactions and smart contracts, which, however, will be recognized by the entire Ethereum blockchain. Nodes that form the shard process information separately, this allows maintaining the principle of decentralization. This is important since the risk of centralization is another big problem of the old algorithm.
Since the complexity of mining has increased over time, and now this process requires having expensive equipment and access to cheap electricity, small participants can not afford to stay in the game. In such conditions, big pools of miners that can provide higher productivity have a decisive advantage. For example, in April, more than 50% of the computing power of the Ethereum network was provided by only two mining pools. This creates a significant risk of centralization and “51% attacks”.
Validators will confirm transactions and get rewards in the form of passive income. According to the project’s roadmap, this amount will vary from 1.81% to 18.1%. The profitability of the stacking will depend on the number of validators. The more of them, the smaller the amount they get. However, there will be some costs. In the same Ethereum 2.0 roadmap, developers mentioned that the cost of validating transactions, based on rough calculations, will be about $180 per year. One of the developers of the project, Justin Drake, predicts that on average the validator will receive an income of 5% per year.

What is the estimated Ethereum 2.0 release date?

The launch of Ethereum 2.0 will take place gradually, in six stages, the “zero” of which is expected this summer. However, it is worth noting that due to finding vulnerabilities, the dates have already been shifted several times–initially, the transition to the new version was planned in 2019.
One of the developers of the project, Afri Schoedon, said that the launch could be postponed to 2021. According to him, under favourable circumstances, the main network can be presented in November of this year, but there are certain difficulties in this.
Schoedon explained that before launching ETH 2.0, all of its clients must be brought to the same specifications. After that, the developer’s team needs to open a unified deposit contract so that users can transfer their assets from the old chain to the new one. Between these stages, developers also need additional time, so they could test all aspects of the new system.
As it usually happens, there’s going to be two parallel blockchains as a result of the hard fork. The first one, ETH1, will continue to work using an old protocol, while the update will be implemented on ETH2. Users will be able to transfer their coins from the old blockchain to the new one, but not vice versa. The appearance of sharding will allow developers to move to phase 1.5 — during this phase, ETH1 will merge with ETH2, becoming one of the 64 “shards” of the updated blockchain. In the second phase, smart contracts become available on ETH2, which can be considered the full start of its economic activity.

And what are expectations?

Updating the Ethereum network will increase its technical capabilities, namely, it will speed up and reduce the cost of transactions, as well as make the blockchain less vulnerable for centralization process.
Currently, the absolute majority of decentralized finance projects are developed using the Ethereum platform. The Ethereum 2.0 release will probably attract even more partners who will use the blockchain for their projects.
Ryan Watkins, Messari Analysis company’s researcher, highly values the importance of updating.
“ETH 2.0 is a much stronger catalyst than the Bitcoin halving simply because it’s an uncertain and fundamental change.” — Ryan Watkins wrote on his Twitter account
And the part about uncertainty is hard to disagree with. Of course, there are some concerns about the bright Ethereum future. The coming hard fork carries with it potential negative consequences. For example, after switching to the PoS algorithm, the US Securities and Exchange Commission (SEC) may well admit Ethereum as a security, which will lead to legal complications similar to those faced by Pavel Durov when trying to launch his TON blockchain platform.
For now, ETH is the most popular coin for mining at home, and most of these miners will probably just leave the network.
There is also a risk that the price of Ethereum may fall. To receive passive income for storing ETH, the user will not only need to have 32 coins but also block them through a special transaction. They will not be able to withdraw these blocked funds immediately. As stated in the project roadmap, the cryptocurrency withdrawal process will take at least 18 hours. This could take even more time if many users request the return of tokens at the same time. Thus, if ETH falls in price, it will be impossible to sell it immediately, and there is a risk of losing some capital and all the income received from stacking.
Nevertheless, investors are mostly optimistic: the volume of Ethereum options on the Deribit exchange has grown to an all-time high, which indicates confidence in the future of the Ethereum project. The ETH price is also growing, having overcome the consequences of the March collapse of the cryptocurrency markets.
Most experts agree that the Ethereum price will grow after the update. On the one hand, demand will rise as the altcoin becomes a more attractive investment. On the other hand, the circulating supply will decrease, as users start transferring coins from the first version of the network to the second and locking them up for passive income.
If you want to participate in the future fate of the ETH project, you can buy Ethereum using our service. We provide fast, anonymous and limitless swaps between over 250 cryptocurrencies. Just go to StealthEX and follow these easy steps:
✔ Choose the pair and the amount for your exchange. For example BTC to ETH.
✔ Press the “Start exchange” button.
✔ Provide the recipient address to which the coins will be transferred.
✔ Move your cryptocurrency for the exchange.
✔ Receive your coins.
Follow us on Medium, Twitter and Reddit to get StealthEX.io updates and the latest news about the crypto world. For all requests message us via [[email protected]](mailto:[email protected]).
The views and opinions expressed here are solely those of the author. Every investment and trading move involves risk. You should conduct your own research when making a decision.
Original article was posted on https://stealthex.io/blog/2020/06/30/ethereum-2-0-why-how-and-then/.
submitted by Stealthex_io to conspiracy

Building Ergo: SPV security

There’s often a tension in the crypto world between security and convenience. That trade-off is unacceptable if we want these technologies to be widely used. Here’s how Ergo addresses one common and very important issue.
We all know that the most secure way to use Bitcoin, or any crypto, is to download a copy of the blockchain and run a full node yourself. That way, every time you or anyone else makes a transaction, your client checks the blockchain to ensure it’s valid. You don’t have to trust anyone else.
A full Bitcoin node checks all the blocks in the blockchain (using headers) and makes sure there are no fraudulent transactions. It’s a very secure way of using crypto – but there’s a problem. It requires significant bandwidth, storage and processing power. That kind of commodity hardware is expensive, and using a full node to validate and make transactions is in any case unsuitable for mobile devices. This is particularly true for Bitcoin, where the blockchain is over 270 GB and counting.
SPV
Simplified Payment Verification (SPV) is designed to address this problem. As described in the Bitcoin white paper, an SPV client keeps only the block headers of the longest proof-of-work chain and verifies that a transaction is included in a block by checking its Merkle branch against the header’s Merkle root.
Satoshi notes that this is not a perfect solution, and is vulnerable to an attacker overpowering the network and fooling SPV users.
Moreover, while SPV mode is intended for resource-limited devices, even this ‘lite’ approach is not always feasible. Ethereum’s headers alone total around 5 GB to download, so Ethereum mobile clients do not validate the chain and have to blindly trust third parties.
There are proposals to reduce the requirements for SPV mode by checking just a few random headers, instead of all of them. But it’s hard to do that securely.
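To make the idea concrete, here is a minimal sketch (simplified byte handling, hypothetical function names) of the Merkle-branch check at the heart of SPV: given a transaction hash, the sibling hashes along its path, and the Merkle root from a block header, a client can confirm inclusion without downloading the full block.

```python
# Minimal sketch of an SPV Merkle-branch check. Bitcoin hashes pairs with
# double SHA-256; byte-order details are simplified here for clarity.
import hashlib

def dhash(data: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def verify_merkle_branch(txid: bytes,
                         branch: list[tuple[bytes, str]],
                         merkle_root: bytes) -> bool:
    """branch is a list of (sibling_hash, 'left'|'right') pairs, leaf to root."""
    node = txid
    for sibling, side in branch:
        # Concatenate in the correct order and hash up one level of the tree.
        node = dhash(sibling + node) if side == "left" else dhash(node + sibling)
    return node == merkle_root
```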
Efficient SPV
Several years have been spent researching and developing secure protocols that allow for efficient SPV clients. The two best-known and most reliable protocols are NiPoPoWs and FlyClient.
Ergo implements NiPoPoWs, or Non-Interactive Proofs of Proof-of-Work. This technology can be explored in full on the dedicated website https://nipopows.com.
This enables us to build a mobile SPV client that requires around just 100KB of block headers to be downloaded.
A super-efficient Ergo wallet with SPV security is in development, so stay tuned for more updates!
submitted by eleanorcwhite to btc

Building Ergo: Lite full nodes

Ergo allows any user to run a full node with low resources – meaning you can help maintain the network with a device as simple as a Raspberry Pi.
In a previous post, we looked at Ergo’s SPV mode, which allows for secure, efficient mobile clients. This enables users to make transactions using almost any device.
At the other end of the scale, you might want to run a full node. If you’re a miner, this will require that you download the full blockchain, because you’ll need the whole UTXO (unspent outputs) set to mine new blocks. But you can still run a full node without that UTXO set – vastly reducing the specification and expense of the hardware needed.
Ergo blocks
In Ergo, just like Bitcoin, Ethereum and other blockchains, blocks are broken into sections. In Bitcoin, there’s simply a block header and the transactions themselves. But in Ergo, we have some extra sections that enable new functionality:
The ‘extension’ section contains certain mandatory fields (including links for NiPoPoW, once per 1,024 block epoch) and parameters for miner voting, such as current block size. It can also contain arbitrary fields.
What this means in practice is that different types of node and client can download only those sections of the blocks they need – reducing the demands for storage, bandwidth and CPU cycles.
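As a purely illustrative sketch (the section names and node profiles below are hypothetical, not Ergo’s actual wire format), the idea looks like this:

```python
# Illustrative only: a block is split into independently downloadable parts,
# and each node type fetches only what it needs.
BLOCK_SECTIONS = ["header", "transactions", "proofs", "extension"]

NODE_PROFILES = {
    "spv_client":     ["header"],                       # headers only (plus NiPoPoW links)
    "lite_full_node": ["header", "transactions", "proofs"],
    "miner":          BLOCK_SECTIONS,                   # needs everything, incl. the UTXO set
}

def sections_to_download(node_type: str) -> list[str]:
    return NODE_PROFILES[node_type]

print(sections_to_download("lite_full_node"))  # ['header', 'transactions', 'proofs']
```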
Lite full nodes
While miners need to download everything, lite full nodes only need the transactions and proofs. This means they have a cryptographic guarantee of transactions, without holding the full UTXO set itself.
Lite full nodes check the proofs generated by full nodes (including miners) who do hold the full blockchain, providing a guarantee of ledger validity. In Ethereum, these nodes are called Stateless Clients.
For Ergo, it means you can run a full node and maintain the network with a device as simple as a Raspberry Pi with 512 MB RAM. This strikes a balance between keeping the network secure and not placing an unnecessary burden on users who wish to run a node – improving decentralisation and democratising participation in the Ergo network and community.
submitted by kushti to ergoplatformorg

What are Nano's biggest issues? Let's talk about it!

Let's talk about some of Nano's biggest issues. I also made a video about this topic, available here: https://youtu.be/d9yb9ifurbg.
00:12 Spam
Issues
Potential Mitigations & Outstanding Issues
01:58 Privacy
Issues
  • Nano has no privacy. It is pseudonymous (like Bitcoin), not anonymous.
Potential Mitigations & Outstanding Issues
  • Second layer solutions like mixers can help, but some argue that isn't enough privacy.
  • The current protocol design + the computational overhead of privacy does not allow Nano to implement first-layer privacy without compromising its other features (fast, feeless, and scalable transactions).
02:56 Decentralization
Issues
  • Nano is currently not as decentralized as it could be. ~25% of the voting weight is held by Binance.
  • Users must choose representatives, and users don’t always choose the best ones (or don’t choose at all).
Potential Mitigations & Outstanding Issues
  • Currently, 4 unrelated parties (who all have a verifiable interest in keeping the network running) would have to work together to attack the network.
  • Unlike Bitcoin, there is no mining or fees in Nano. This means that there is not a strong incentive for emergent centralization from profit maximization and economies of scale. We've seen this firsthand, as Nano's decentralization has increased over time.
  • Nano representative percentages are not that far off from Bitcoin mining pool percentages.
  • In Nano, voting weight can be remotely re-delegated to anyone at any time. This differs from Bitcoin, where consensus is controlled by miners and requires significant hardware investment.
  • The cost of a 51% attack scales with the market cap of Nano (a rough illustration follows below).
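As a rough, purely illustrative calculation (hypothetical numbers), the naive cost of buying a majority of the voting weight scales directly with market cap, before even accounting for the price impact of buying that much supply:

```python
# Back-of-the-envelope illustration, not a security analysis.
def naive_attack_cost(market_cap_usd: float, share_needed: float = 0.51) -> float:
    """Cost of buying `share_needed` of the supply at the current market cap."""
    return market_cap_usd * share_needed

for cap in (100e6, 1e9, 10e9):  # hypothetical market caps
    print(f"market cap ${cap:,.0f} -> naive 51% cost ~${naive_attack_cost(cap):,.0f}")
```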
06:49 Marketing & adoption
Issues
  • The best technology doesn't always win. If no one knows about or uses Nano, it will die.
Potential Mitigations & Outstanding Issues
  • I would argue that the best technology typically does win, but it needs to be best in every way (price, speed, accessibility, etc.). Nano is currently in a good place if you agree with that argument.
  • Bitcoin started small, and didn't spend money on marketing. It takes time to build a community.
  • The developers have said they will market more once the protocol is where they want it to be (v20 or v21?).
  • Community marketing initiatives have started to form organically (e.g. Twitter campaigns, YouTube ads, etc).
  • Marketing and adoption is a very difficult problem to solve, especially when you don't have first mover advantage or consistent cashflow.
08:07 Small developer fund
Issues
  • The developer fund only has 3 million NANO left (~$4MM). What happens after that?
Potential Mitigations & Outstanding Issues
  • The goal for Nano is to be an Internet RFC like TCP/IP or SMTP - development naturally slows down when the protocol is in a good place.
  • Nano development is completely open source, so anyone can participate. Multiple developers are now familiar with the Nano protocol.
  • Businesses and whales that benefit from Nano (exchanges, remittances, merchant services, etc) are incentivized to keep the protocol developed and running.
  • The developer fund was only ~5% of the supply - compare that to some of the other major cryptocurrencies.
10:08 Node incentives
Issues
  • There are no transaction fees, so why would people run nodes to keep the network running?
Potential Mitigations & Outstanding Issues
  • The cost of consensus is so low in Nano that the benefits of the network itself are the incentive: decentralized money with 0 transaction fees that can be sent anywhere in the world nearly instantly.
  • Paying $50-$100 a month for a high-end node is a lot cheaper for merchants than paying 1-3% of total sales in processing fees (see the back-of-the-envelope comparison after this list).
  • Businesses and whales that benefit from Nano (exchanges, remittances, merchant services, etc) are incentivized to keep the protocol developed and running.
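A quick back-of-the-envelope comparison (hypothetical numbers, using the $50-$100/month figure above and an assumed 2% processing fee) shows how low the break-even point is:

```python
# Rough comparison: running a high-end node vs. paying percentage-based fees.
NODE_COST_PER_MONTH = 100   # upper end of the $50-$100 estimate above
FEE_RATE = 0.02             # assumed 2% payment-processing fee

def breakeven_monthly_sales(node_cost: float, fee_rate: float) -> float:
    """Sales volume above which running a node is cheaper than paying fees."""
    return node_cost / fee_rate

print(f"Break-even: ${breakeven_monthly_sales(NODE_COST_PER_MONTH, FEE_RATE):,.0f} "
      "in monthly sales")  # -> $5,000/month at these assumptions
```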
11:58 No smart contracts
Issues
  • Nano doesn't support smart contracts.
Potential Mitigations & Outstanding Issues
  • Nano's sole goal is to be the most efficient peer-to-peer value transfer protocol possible. Adding smart contracts makes keeping Nano feeless, fast, and decentralized much more difficult.
  • Other solutions (e.g. Ethereum) exist for creating and enforcing smart contracts.
  • Code can still interact with Nano, but not on the first layer in a decentralized manner.
  • Real world smart contract adoption and usage is pretty limited at the moment, but that might not always be the case.
13:20 Price stability
Issues
  • Why would anyone accept or spend Nano if the price fluctuates so much?
  • Why wouldn't people just use a stablecoin version of Nano for sending and receiving money?
Potential Mitigations & Outstanding Issues
  • With good fiat gateways (stable, low fees, etc), you can always buy back the fiat equivalent of what you've spent.
  • The hope is that with enough adoption, people and businesses will eventually skip the fiat conversion and use Nano directly.
  • Because Nano is so fast, volatility is less of an issue. Transactions are confirmed in <10 seconds, and prices change less in that timeframe (vs 10 minutes to hours for Bitcoin).
  • Stablecoins reintroduce trust. Stable against what? Who controls the supply, and how do you get people to adopt them? What happens if the assets they're stable against fail? Nano is pure supply and demand.
  • With worldwide adoption, the market capitalization of Nano would be in the trillions. If that happens, even millions of dollars won't move the price significantly.
15:06 Deflation
Issues
  • Nano's current supply == max supply. Why would people spend Nano today if it could be worth more tomorrow?
  • What happens to principal representatives and voting weight as private keys are lost? How do you know keys are lost?
Potential Mitigations & Outstanding Issues
  • Nano is extremely divisible. 1 NANO is 10^30 raw. Since there are no transaction fees, smaller and smaller amounts of Nano could be used to transact, even if the market cap reaches trillions.
  • People will always buy things they need (food, housing, etc).
  • I'm not sure what the plan is to adjust for lost keys. Probably requires more discussion.
Long-term Scalability
Issue
  • Current node software and hardware cannot handle thousands of TPS (low-end nodes fall behind at even 50 TPS).
  • The more representatives that exist, the more vote traffic is required (network bandwidth).
  • Low-end nodes currently slow down the network significantly. Principal representatives waste their resources constantly bootstrapping these weak nodes during network saturation.
Potential Mitigations & Outstanding Issues
  • Even as is, Nano can comfortably handle an average of 50 TPS, which is roughly PayPal’s 2011 transaction volume (averaged per second) with nearly 100 million users (rough arithmetic after this list).
  • Network bandwidth increases 50% a year.
  • There are some discussions of prioritizing bootstrapping by vote weight to limit the impact of weak nodes.
  • Since Nano uses an account balance system, pruning could drastically reduce storage requirements. You only need current state to keep the network running, not the full transaction history.
  • In the future, vote stapling could drastically reduce bandwidth usage by collecting all representative signatures up front and then only sharing that single aggregate signature.
  • Nano has no artificial protocol-based limits (e.g. block sizes or block times). It scales with hardware.
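For the PayPal comparison above, the rough arithmetic (using illustrative figures, not exact PayPal statistics) works out as follows:

```python
# Rough arithmetic behind the TPS comparison; the daily-volume figure is an
# illustrative assumption.
SECONDS_PER_DAY = 24 * 60 * 60  # 86,400

def tps_from_daily_volume(tx_per_day: float) -> float:
    return tx_per_day / SECONDS_PER_DAY

# A network sustaining 50 TPS clears ~4.3 million transactions per day:
print(f"{50 * SECONDS_PER_DAY:,} tx/day at a sustained 50 TPS")

# Conversely, ~4 million transactions/day averages out to ~46 TPS:
print(f"~{tps_from_daily_volume(4_000_000):.0f} TPS for 4M tx/day")
```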
submitted by Qwahzi to nanocurrency

A full node is a program that fully validates transactions and blocks. Almost all full nodes also help the network by accepting transactions and blocks from other full nodes, validating them, and then relaying them on to further full nodes. Most full nodes also serve lightweight clients by allowing them to transmit their transactions to the network and by notifying them ...

Running a full node is the only way you can use Bitcoin in a trustless way. You will know for sure that all the rules of Bitcoin are being followed: for example, that no bitcoins are spent that do not belong to the owner, that no coins are spent twice, that no inflation happens outside of the schedule, and that all the rules needed to make the system work (e.g. difficulty) are followed.

There are various ways you can run a Bitcoin Cash full node, whether in a cloud, on a local machine, or on a small single-board computer.

How to set up your Bitcoin full node: since we’ve looked at the benefits of running a Bitcoin full node and what you’ll need, let’s take a look at how to set it up. Step 1: Hardware. You can either run your full node on your computer, or on an external device like a Raspberry Pi 3+. Step 2: Choose your OS.

Full nodes are the lifeblood of Bitcoin’s consensus, as validation of transactions is the underlying component that gives transactions authenticity and, subsequently, value. Unsurprisingly, reducing the costs (i.e., data storage, bandwidth, etc.) associated with running these validation nodes has a premium in the Bitcoin community.
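Once a node is up, you can check how far it has synced through Bitcoin Core’s JSON-RPC interface. A minimal sketch, assuming rpcuser and rpcpassword are set in your bitcoin.conf and the default mainnet RPC port 8332 is used (credentials below are placeholders):

```python
# Query a local Bitcoin Core node's sync status over JSON-RPC.
import requests

RPC_URL = "http://127.0.0.1:8332"
RPC_AUTH = ("youruser", "yourpassword")  # placeholders: match your bitcoin.conf

def rpc(method: str, params=None):
    payload = {"jsonrpc": "1.0", "id": "check", "method": method, "params": params or []}
    r = requests.post(RPC_URL, json=payload, auth=RPC_AUTH, timeout=10)
    r.raise_for_status()
    return r.json()["result"]

info = rpc("getblockchaininfo")
print(f"blocks: {info['blocks']} / headers: {info['headers']}")
print(f"verification progress: {info['verificationprogress'] * 100:.2f}%")
print(f"pruned: {info['pruned']}")
```

Until the verification progress approaches 100%, the node is still downloading and validating the chain and should not be relied on for final balances.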
