A Look Back at Gavin Wood's 2014 Web 3.0 Forecast
As we move into the future, we find an increasing need for zero-trust interaction.
Even before Snowden, we understood that entrusting our information to arbitrary entities on the Internet was dangerous. In the aftermath of the Snowden revelations, however, the argument plainly falls to those who believe that big business and governments routinely attempt to overstep their authority. Entrusting our information to organizations in general is therefore a broken model: the chance of an organization not meddling with our data is merely the effort required to do so weighed against the expected gain. And since standard business models for monetization require learning as much as possible about their users, a realist will conclude that the potential for covert misuse is hard to overestimate.
The protocols and technologies of the Web, and even the Internet as a whole, served as a great technology preview. SMTP, FTP, HTTP(S), PHP, HTML and JavaScript all contributed to the rich cloud-based applications we see today: Google Drive, Facebook and Twitter, plus countless others for gaming, shopping, banking and chat. Going forward, however, many of these protocols and technologies will have to be re-engineered according to our new understanding of the interaction between society and technology.
What we might call Web 3.0, or the “post-Snowden” web, reimagines the things we already use the web for, but with a fundamentally different model for the interaction between parties. Information we consider public, we publish. Information we assume to be agreed upon, we place on a consensus ledger. Information we consider private, we keep secret and never reveal. Communication always takes place over encrypted channels, with only pseudonymous identities as endpoints; nothing traceable (such as an IP address) is ever used.
In short, since no government or organization can reasonably be trusted, we engineer the system to enforce these assumptions mathematically.
The post-Snowden web has four components: static content publication, dynamic messaging, trustless transactions, and an integrated user interface.
Publication
First, we already have much of this: a decentralized, encrypted system for publishing information. All it does is take a short intrinsic address of some data (i.e., a hash) and, possibly after some time, return the data itself; new data may be submitted to it in the same way. Once downloaded, we can guarantee the data is correct, since its address is implied by the data itself. This static publication system takes over much of what HTTP(S) does and all of what FTP does. There are many technologies that could fill this role, but the most obvious is BitTorrent: every time a user clicks a BitTorrent magnet link, all they are really doing is asking the swarm for the file corresponding to a given hash.
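As a rough illustration of this content-addressing idea (a toy in-memory sketch, not BitTorrent's actual DHT and piece-hash machinery), the publish/fetch cycle might look like:

```python
import hashlib

class ContentStore:
    """Toy content-addressed store: data lives under its own hash."""

    def __init__(self):
        self._blobs = {}

    def publish(self, data: bytes) -> str:
        """Store data and return its intrinsic address (SHA-256 hash)."""
        address = hashlib.sha256(data).hexdigest()
        self._blobs[address] = data
        return address

    def fetch(self, address: str) -> bytes:
        """Retrieve data and verify it matches the requested address."""
        data = self._blobs[address]
        if hashlib.sha256(data).hexdigest() != address:
            raise ValueError("data does not match its address")
        return data

store = ContentStore()
addr = store.publish(b"hello, web 3.0")
assert store.fetch(addr) == b"hello, web 3.0"
```

Note that verification needs no trust in whoever served the bytes: the address itself is the integrity check.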
In Web 3.0, this piece of technology is used for publishing and downloading any static (and potentially large) files we are happy to share. As with BitTorrent, we can incentivize others to maintain and share this information, but integration with the other components of Web 3.0 makes it more efficient and precise. And because the incentive framework is intrinsic to the protocol, DDoS attacks are ruled out by design. How's that for a bonus?
Communication
The second component of Web 3.0 is an identity-based, pseudonymous, low-level messaging system, used for communication between people on the Internet. It uses strong cryptography to provide firm guarantees: a message can be encrypted with the public key of an identity so that only that identity can decrypt it, and signing with the sender's private key proves the message's origin to the recipient. Shared secrets allow secure communication, including within groups, without leaving any proof of authorship that could be shown to a third party.
There is no need for hierarchical transport addresses, since the network itself handles final delivery; where an address was once a user's current IP address and port, it is now simply a hash, an opaque pseudonymous identity.
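A minimal sketch of such an opaque endpoint address, assuming the identity's public key is simply hashed (the key bytes below are placeholders, not a real key):

```python
import hashlib

def identity_address(public_key: bytes) -> str:
    # The address is just a hash of the public key: nothing in it
    # reveals a network location, unlike an IP address and port.
    return hashlib.sha256(public_key).hexdigest()[:40]

# Placeholder key bytes standing in for a real public key.
alice_pub = bytes.fromhex("04" + "ab" * 32)
alice_addr = identity_address(alice_pub)
print(alice_addr)
```

Anyone holding the public key can verify the address; no one can work backwards from it to a physical endpoint.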
Messages also carry a time-to-live, which lets us distinguish publications from instant messages. The former are retained as long as possible and delivered to as many identities as possible; the latter are pushed across the network as quickly as possible and then dropped. In practice there is a sliding scale between the two, trading longevity against cost.
Physical routing is handled by a game-theoretically adaptive network: each node seeks its own advantage by demonstrating to other nodes that the data it forwards to them is valuable. One prized property is that a message's contents and destination remain unknowable to everyone except its recipient (and, perhaps, its sender). To gain efficiency, a message may deliberately divulge some information, such as hints about its sender or contents left unencrypted or marked with a particular prefix, so that it can be routed preferentially.
In Web 3.0, this component lets nodes communicate, publishing and updating dynamic, possibly private, data in real time, with no need for trust and no permanent record. On a traditional website, this covers most of the data exchanged over HTTP in an AJAX-style implementation.
Consensus
The third component of Web 3.0 is the consensus engine. Bitcoin introduced many of us to the idea of consensus as an application. However, this is only the first necessary step. A consensus engine is a means of agreeing on a set of rules of interaction, in the knowledge that future interactions (or the lack of them) will be carried out, and judged, exactly according to those rules. It is, in effect, an all-encompassing social contract, drawing its strength from the web of agreements woven through it.
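The core idea, that nodes applying the same deterministic rules to the same ordered log of transactions must reach the same state, can be sketched as follows (this toy deliberately omits how the order itself is agreed, e.g. by proof-of-work, and the rule and transactions are invented):

```python
def apply_rules(state: dict, tx: dict) -> dict:
    """One agreed rule: transfers succeed only with sufficient balance."""
    new_state = dict(state)
    sender, recipient, amount = tx["from"], tx["to"], tx["amount"]
    if new_state.get(sender, 0) >= amount:
        new_state[sender] = new_state.get(sender, 0) - amount
        new_state[recipient] = new_state.get(recipient, 0) + amount
    return new_state  # an invalid transaction simply has no effect

# The agreed, ordered transaction log every node sees.
log = [
    {"from": "alice", "to": "bob", "amount": 30},
    {"from": "bob", "to": "carol", "amount": 50},  # invalid: bob has 30
    {"from": "bob", "to": "carol", "amount": 10},
]

state = {"alice": 100}  # genesis state
for tx in log:
    state = apply_rules(state, tx)
print(state)  # every honest node computes the identical final state
```

Because `apply_rules` is deterministic, no node needs to trust any other node's arithmetic, only the agreed order of the log.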
The key is that breaching one contract can affect your standing in all others; this is what makes contractual relationships credible and sharply reduces the incentive to breach. The effect grows as reputation becomes less separable from person-to-person relationships: a reputation system combined with Facebook- or Twitter-like social features would work far better than one without, because your standing would then depend on what your friends, partners and colleagues think of you. A small concrete example is whether, and when, you befriend an employer on Facebook.
The consensus engine will be used for all trustworthy publication and alteration of information, delivered through a single, globally shared transactional machine. The first example of such a project is Ethereum.
The existing web never solved the consensus problem, falling back instead on trust in authoritative organizations such as ICANN, Verisign and Facebook.
Front End
The fourth and final component of Web 3.0 is the technology that ties it all together: the "browser" and user interface. Interestingly, it will look rather similar to the browser interface we already know and love: a URI bar, a back button, and, of course, most of the screen given over to displaying the dapp (e.g., the website/webpage).
Through a name-resolution system (like Namecoin, but implemented as a contract on the consensus engine), a URI can be reduced to the intrinsic address (i.e., the hash) of a front-end dapp. Through the publication component, that hash can then be expanded into the collection of files the front end needs, such as .html, .js, .css and .jpg files. This is the static part of a dapp (or "dapplet").
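One way to picture a static dapplet is as a manifest published under its own hash, whose entries are the content hashes of the front-end files; everything below (file names, contents) is invented purely for illustration:

```python
import hashlib
import json

def h(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Invented front-end files for a toy dapplet.
files = {
    "index.html": b"<html>...</html>",
    "app.js": b"console.log('dapp');",
    "style.css": b"body { margin: 0 }",
}

# Publish each file under its content hash.
store = {h(data): data for data in files.values()}

# The manifest maps file names to content hashes, and is itself
# published under its own hash: one address yields the whole bundle.
manifest = {name: h(data) for name, data in files.items()}
manifest_bytes = json.dumps(manifest, sort_keys=True).encode()
dapp_address = h(manifest_bytes)
store[dapp_address] = manifest_bytes

# A browser given only dapp_address can recover and verify every file.
fetched = json.loads(store[dapp_address])
assert all(h(store[digest]) == digest for digest in fetched.values())
```

Resolving a name to a single hash is therefore enough to fetch, and verify, an entire static front end.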
Dynamic content is delivered through the other communication components. Content whose basis must be trusted and must remain the same for all time, such as reputations, balances and the like ("fixed and unchangeable"), is read and written through an API to the consensus engine. Content that may change, disappear, or go unrecorded entirely is collected and delivered through the peer-to-peer messaging engine.
There will be a few familiar leftovers. You will still see URL-like addresses, but the client-server pattern of "https://address/path" gives way to new names such as "goldcoin" and "uk.gov". Name resolution is handled by a contract on the consensus ledger, which users can easily redirect or update. It also allows several levels of resolution: for example, "uk.gov" could mean the sub-name "gov" as resolved by whatever name-resolution service the name "uk" provides.
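A toy sketch of this multi-level resolution, with each name mapping either to a content hash or to another resolver (the registry contents here are hypothetical):

```python
class Resolver:
    """Toy name registry: label -> content hash, or -> sub-resolver."""

    def __init__(self, table: dict):
        self.table = table

    def resolve(self, name: str):
        # Names like "uk.gov" resolve left to right: "uk" here,
        # then "gov" is delegated to whatever "uk" points at.
        label, _, rest = name.partition(".")
        target = self.table[label]
        return target.resolve(rest) if rest else target

# Hypothetical registry contents.
uk_resolver = Resolver({"gov": "0xhash-of-gov-dapp"})
root = Resolver({
    "goldcoin": "0xhash-of-goldcoin-dapp",
    "uk": uk_resolver,
})

print(root.resolve("goldcoin"))
print(root.resolve("uk.gov"))  # delegated to the "uk" resolver
```

On the real consensus ledger, each `Resolver` would be a contract whose table only its owner can update.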
Because everything a browser receives is either static and verifiable against its address, or dynamic and tracked by the browser itself through consensus updates and its watch on the peer-to-peer network, back-end server applications largely disappear; the dapps or dapplets take over their role. This is key to the Web 3.0 experience.
After the initial synchronization, page-load times shrink to effectively zero, because the static data has already been fetched and is guaranteed current, and the dynamic data (delivered over the recommended peer-to-peer engine or the consensus message log) is likewise kept up to date. Stale data can occur only while a node is out of sync, so the user experience becomes extremely dependable.
For Web 3.0 users, all interactions are pseudonymous and secure and, for many services, trustless. Where a service does still require a third party, these tools let users and dapp developers spread trust across several different entities, greatly reducing the amount of trust that must be placed in the hands of any single one.
As the APIs between front end and back end harden and separate, we will see new technologies adopted to deliver a better user experience. For example, Qt's QtQuick and QML technologies could replace the traditional HTML/CSS mix, delivering native interfaces and fast graphics with a minimal footprint and good performance.
Upgrading to Web 3.0
This change will be gradual.
Alongside Web 2.0, more and more sites will use Web 3.0 components on the back end, such as Bitcoin, BitTorrent and Namecoin. This trend will continue, and Ethereum, the first true Web 3.0 platform, will likely be used by sites that want to provide transactional evidence for their content, such as voting sites and exchanges. Of course, a system's security is only as strong as its weakest link, so eventually such sites will move into fully fledged Web 3.0 browsers, delivering end-to-end security and trustless interaction.
Welcome to Web 3.0: a secure social operating system.