the raw data being disclosed (Ferrer et al., 2019). The swarm agents could use the secure communications and consensus-reaching properties of blockchains to coordinate, self-govern, and problem-solve. Further, zero-knowledge technology (which separates data verification from the underlying data) could be used in two ways: for an agent to obtain the Merkle tree-stored data relevant to its own activity, and to prove its integrity to peers by exchanging cryptographic proofs.

      Various features of blockchains are implicated in this advanced swarm robotics model. The basic features are privacy and secure communication. Consensus technology is then used to reach a self-orchestrated group agreement without a centralized authority. Merkle tree path-addressing exposes only need-to-know information. Finally, zero-knowledge proofs are used to prove earnest participation to peers without revealing any underlying personal information.
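      As a concrete illustration of the Merkle tree path-addressing idea, the Python sketch below shows how an agent could prove that a single data record belongs to a shared Merkle root without exposing the other records. The helper names and data are hypothetical; this is a generic Merkle inclusion proof, not an implementation from the cited swarm robotics work.

# Minimal sketch of Merkle tree path-addressing: an agent proves that one
# record is included under a shared Merkle root without revealing the
# other records. Helper names are illustrative, not from any cited system.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_levels(leaves):
    """Return all levels of the Merkle tree, from leaf hashes up to the root."""
    level = [h(x) for x in leaves]
    levels = [level]
    while len(level) > 1:
        if len(level) % 2:                 # duplicate the last node on odd levels
            level = level + [level[-1]]
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        levels.append(level)
    return levels

def inclusion_proof(levels, index):
    """Collect the sibling hash at each level along the path to the root."""
    proof = []
    for level in levels[:-1]:
        if len(level) % 2:
            level = level + [level[-1]]
        sibling = index ^ 1                # the sibling shares the same parent
        proof.append((level[sibling], index % 2))  # (hash, node_is_right_child)
        index //= 2
    return proof

def verify(leaf, proof, root):
    node = h(leaf)
    for sibling, is_right in proof:
        node = h(sibling + node) if is_right else h(node + sibling)
    return node == root

records = [b"agent-1 telemetry", b"agent-2 telemetry",
           b"agent-3 telemetry", b"agent-4 telemetry"]
levels = build_levels(records)
root = levels[-1][0]
proof = inclusion_proof(levels, 2)         # agent 3 proves only its own record
print(verify(records[2], proof, root))     # True

      In this setup, only the agent's own record and a logarithmic number of sibling hashes are exchanged; peers verify membership against the root without seeing the rest of the tree.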

      2.3.4.2 Deep learning chains

      Deep learning chains refers to the concept of a further convergence of smart networks: a generalized control technology with properties of both blockchain and deep learning (Swan, 2018). Deep learning chains instantiate the secure automation, audit-log tracking, remunerability, and validated transaction execution of blockchains, together with the object identification (IDtech), pattern recognition, and optimization technology of deep learning. Deep learning chains in the form of blockchain-based reinforcement learning have been proposed for an air traffic control system (Duong et al., 2019). Deep learning chains might also serve as a general control technology for fleet-many internet-connected smart network technologies such as UAVs, drones, automated supply chain networks, robotic swarms, autonomous vehicle networks, and space logistics platforms. The minimal functionality of deep learning chains in autonomous driving fleets is identifying objects in a driving field (deep learning) and tracking vehicle activity (blockchain). Deep learning chains could likewise apply to the body, as a smart network control technology for medical nanorobots, identifying pathogens (deep learning) and tracking and expunging them (blockchain smart contracts). There could be greater convergence between individual smart network technology platforms (listed in Table 2.4 per their operating focus). For example, blockchains are starting to appear more regularly in the context of smart city power grid management (Pieroni et al., 2018).
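      The minimal pairing described above can be sketched in a few lines of Python: a (stubbed) object detector stands in for the deep learning step, and each detection is appended to a hash-chained, tamper-evident audit log standing in for the blockchain step. All names here are hypothetical illustrations, not an implementation from the cited works.

# Illustrative sketch of a "deep learning chain": a stubbed object detector
# supplies classifications, and each detection is appended to a hash-chained
# audit log (a stand-in for blockchain-based activity tracking).
import hashlib, json, time

def detect_objects(frame):
    """Stand-in for a deep learning model; returns labels for a driving frame."""
    return ["pedestrian", "stop_sign"]          # placeholder output

class AuditChain:
    def __init__(self):
        self.blocks = [{"prev": "0" * 64, "payload": "genesis"}]

    def _hash(self, block):
        return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

    def append(self, payload):
        block = {"prev": self._hash(self.blocks[-1]),
                 "payload": payload,
                 "time": time.time()}
        self.blocks.append(block)
        return self._hash(block)

    def valid(self):
        return all(self.blocks[i]["prev"] == self._hash(self.blocks[i - 1])
                   for i in range(1, len(self.blocks)))

chain = AuditChain()
for label in detect_objects(frame=None):        # deep learning step
    chain.append({"vehicle": "AV-42", "detected": label})  # blockchain step
print(chain.valid())                            # True: tamper-evident activity log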

      2.3.4.3 Deep learning proofs

      Computational proofs are a mechanistic set of algorithms that could be incorporated as a feature in many smart network technology systems to provide privacy and validation. The potentially wide-scale adoption of zero-knowledge proof technology in blockchains makes blockchains a PrivacyTech and a ProofTech. Zero-knowledge proof technology could be similarly adopted in other smart network systems such as machine learning, for example in the idea of deep learning proofs. There are two reasons to do so. The first is the usual use case for proofs: proving validity. This could be an important functionality in image recognition networks in autonomous driving, for example, where the agent (the vehicle) is able to prove that certain behaviors were taken. The second reason is that proofs are an efficient mechanism with wider applicability beyond the proof execution context.

Table 2.4. Smart networks and their operational focus.

Smart network | Smart network operational focus
1. Unmanned aerial vehicles (UAVs) | UAV drones with autonomous strike capability
2. High-frequency trading (HFT) | Algorithmic trading (40% US equities), auto-hedging
3. Real-time bidding (RTB) | Automated digital advertising placement
4. Energy smart grids | Power grid load-balancing and transfer
5. Blockchain economic networks | Transaction validation, self-governance, smart contracts
6. Deep learning networks | Object identification (IDtech), pattern recognition, optimization
7. Smart City IoT sensor landscapes | Traffic navigation, data climate, global information feeds
8. Industrial robotics cloudminds | Industrial coordination (cloud-connected smart machines)
9. Supply chain logistics nets | Automated sourcing, ordering, shipping, receiving, payment
10. Personal robotic assistant nets | Personalization, backup, software updates, fleet coordination
11. Space: aerial logistics rings | In situ resource provisioning, asynchronous communication

      A central challenge in deep learning systems, which occupies a significant portion of research effort, is efficiently calculating the error contribution of each node to the overall system processing. Various statistical error assessment methods are employed, such as mean squared error (MSE), sum of squared errors of prediction (SSE), cross-entropy (softmax), and softplus (a smoothing function). An improved method for calculating each node's error contribution would therefore be valuable.
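      For reference, the short NumPy sketch below computes the error measures named above (MSE, SSE, cross-entropy over a softmax output, and softplus) for illustrative values. It is a generic illustration of these standard formulas, not a method proposed in the text.

# Generic NumPy illustration of the error measures named above.
import numpy as np

y_true = np.array([0.0, 1.0, 0.0])            # one-hot target
logits = np.array([0.2, 1.5, -0.3])           # raw network outputs

def softmax(z):
    e = np.exp(z - z.max())                   # shift for numerical stability
    return e / e.sum()

y_pred = softmax(logits)

mse = np.mean((y_true - y_pred) ** 2)                     # mean squared error
sse = np.sum((y_true - y_pred) ** 2)                      # sum of squared errors
cross_entropy = -np.sum(y_true * np.log(y_pred + 1e-12))  # cross-entropy loss
softplus = np.log1p(np.exp(logits))                       # smooth approximation of ReLU

print(mse, sse, cross_entropy, softplus)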

      Proofs might be a useful solution because they are an information compression technique. Some portion of work is conducted, and the abstracted output is all that is needed as a result: the proof evaluates to a one-bit True/False answer or some other short, validated answer. With a proof structure, deep learning perceptrons could communicate their results using fewer information bits than they do now. The perceptron is a two-tier information system, with meta-attributes about its processing (error contribution, weights, biases) and the underlying values computed in the processing. The proof structure could be instantiated in the TensorFlow software architecture so that proofs would be generated automatically as a feature flowing through the system's matrix multiplications. The idea of deep learning proofs is that in a deep learning system, perceptrons could execute a proof of their node's contribution.
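      A speculative sketch of this idea follows: a layer emits, alongside its activations, a short commitment over its weights, inputs, and outputs. Here a plain hash digest stands in for a real zero-knowledge or succinct proof (a verifier must recompute the layer to check it, which a genuine proof system would avoid), and all names are hypothetical.

# Speculative sketch of a "deep learning proof": a layer emits a compact
# commitment over its computation in addition to its activations. The hash
# digest is only a stand-in for a real zero-knowledge proof.
import hashlib
import numpy as np

def commit(*tensors) -> str:
    """Compress a layer's computation record into a short digest."""
    m = hashlib.sha256()
    for t in tensors:
        m.update(np.ascontiguousarray(t, dtype=np.float64).tobytes())
    return m.hexdigest()[:16]                 # short "proof-like" answer

def proving_layer(x, W, b):
    y = np.maximum(W @ x + b, 0.0)            # ordinary ReLU perceptron layer
    return y, commit(W, b, x, y)              # activations plus compact commitment

rng = np.random.default_rng(0)
x = rng.normal(size=4)
W, b = rng.normal(size=(3, 4)), rng.normal(size=3)

y, proof = proving_layer(x, W, b)
# Verification: recompute and compare digests instead of exchanging full tensors.
print(proof == commit(W, b, x, np.maximum(W @ x + b, 0.0)))   # True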

      Deep learning consensus algorithms are another idea, in which consensus algorithms would be employed in deep learning systems so that perceptrons self-coordinate their answers. Through deep learning consensus algorithms, the perceptrons could self-orchestrate the processing of the nodes, and also their initial setup into an optimal configuration of layers and nodes for the problem at hand. Consensus technology is a mechanism for self-organization and governance in multi-agent systems. Deep learning consensus algorithms build on the idea of deploying consensus technologies in robotic swarms to self-coordinate to achieve mission objectives (Ferrer et al., 2019).
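      A toy illustration of the underlying consensus primitive is gossip-style averaging, in which nodes repeatedly average their value with their neighbors' values until the group converges on a shared answer. The Python sketch below uses a hypothetical five-node ring and is offered only as an analogy for self-coordinating perceptrons, not as a mechanism from the cited work.

# Toy gossip-style consensus: nodes repeatedly average with their neighbors
# until all converge on a common value.
import numpy as np

# Ring topology: each node talks to its two neighbors.
neighbors = {i: [(i - 1) % 5, (i + 1) % 5] for i in range(5)}
values = np.array([0.9, 0.1, 0.4, 0.7, 0.3])   # each node's initial local answer

for _ in range(50):
    values = np.array([
        np.mean([values[i]] + [values[j] for j in neighbors[i]])
        for i in range(5)
    ])

print(np.round(values, 3))   # all nodes converge to (approximately) the same value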

      The notion of smart network theory as a physical basis for smart network technologies is developed into the smart network field theory (SNFT) and the smart network quantum field theory (SNQFT), with respect to the two scale domains. The intuition is that the way to orchestrate many-particle systems from a characterization, control, criticality, and novelty emergence standpoint is through field theories such as an SNFT and an SNQFT. Such theories should be able to make relevant predictions about smart network systems as part of their operation.

      Large-scale networks are a feature of contemporary reality. Such network entities are complex systems comprising thousands to billions of elements, and require an SNFT or other similar mechanism for the automated characterization, monitoring, and control of their activity. A theoretically grounded model is needed, and smart network theories based on statistical physics (statistical neural field theory and spin-glass models), information theory (the AdS/CFT correspondence), and model systems are proposed. Generically, an SNFT (conventional or quantum) is a field theory for the characterization, monitoring, and control of smart network systems, particularly for criticality detection and fleet-many item management.