
    Decentralized Heterogeneous Federal Distillation Learning Based on Blockchain

2023-10-26 13:14:36
Computers, Materials & Continua, September 2023 issue

    Hong Zhu,Lisha Gao,Yitian Sha,Nan Xiang,Yue Wu and Shuo Han

Nanjing Power Supply Branch, State Grid Jiangsu Electric Power Co., Ltd., Nanjing, 210000, China

ABSTRACT Load forecasting is a crucial aspect of intelligent Virtual Power Plant (VPP) management and a means of balancing the relationship between distributed power grids and traditional power grids. However, due to the continuous emergence of power consumption peaks, the power supply quality of the grid cannot be guaranteed. An intelligent calculation method is therefore required to predict the load effectively, enabling better power grid dispatching and ensuring the stable operation of the grid. This paper proposes a decentralized heterogeneous federated distillation learning algorithm (DHFDL) to promote trusted federated learning (FL) between different federates on the blockchain. The algorithm comprises two stages: common knowledge accumulation and personalized training. In the first stage, each federate on the blockchain is treated as a meta-distribution; after the knowledge of each federate is aggregated cyclically, the model is uploaded to the blockchain. In the second stage, other federates on the blockchain download the trained model for personalized training. Both stages are based on knowledge distillation. Experimental results demonstrate that the proposed DHFDL algorithm can resist a higher proportion of malicious nodes than FedAvg and the Blockchain-based Federated Learning framework with Committee consensus (BFLC). Additionally, by combining asynchronous consensus with the FL model training process, DHFDL achieves the shortest training time and improves the training efficiency of decentralized FL.

KEYWORDS Load forecasting; blockchain; distillation learning; federated learning; DHFDL algorithm

    1 Introduction

With natural environmental problems becoming increasingly prominent and the constraints on fossil energy tightening, it is imperative to vigorously develop renewable energy and promote national energy transformation. However, because of its small capacity, decentralized layout, and strongly random output, distributed renewable energy affects the security and reliability of the grid when connected on its own, making it difficult for such sources to participate in the power market as independent individuals. The large-scale integration of distributed generation therefore requires intelligent centralized management to coordinate the power grid effectively. As an important form of intelligent management, the VPP [1] not only fosters enhanced interaction among users but also bolsters the stability of the power grid. However, the continuous emergence of power consumption peaks increases the load on virtual power plants, affecting the grid's power supply quality. Consequently, the VPP requires a sophisticated computational approach to predict load demands accurately, thereby enabling more efficient power grid dispatching and management.

In the load forecasting business, each VPP platform accumulates large volumes of enterprise power consumption data, and the accuracy of power load forecasting can be improved through cross-domain collaborative computing over these data. Traditional machine learning requires centralized training of data. However, the cross-domain transmission of data within the power grid suffers from problems such as data theft, data tampering, unclear data rights and responsibilities, and low transmission efficiency. The power grid has the right to access an enterprise's power data and view it internally, but not the right to disclose it. If enterprise power data were stolen, sold, or disclosed during transmission, the credibility of the power grid would suffer a severe blow. At the same time, the rights and responsibilities arising from cross-domain data need to be clarified, and different branches pay different levels of attention to the same data, which may further increase the risk of data leakage.

As a form of privacy-preserving distributed machine learning, FL can protect data security and ensure the consistency of data rights and responsibilities; it is secure collaborative computing with data ownership confirmation. It also eliminates data transmission links and reduces the energy consumption of collaborative computing, making it a form of green collaborative computing. FL ensures that the local data owned by the participants stay within their control while joint model training is conducted, and it can better solve problems such as data islands and data privacy. At present, FL has been widely applied in various fields.

However, existing FL primarily relies on a parameter server to generate or update the global model parameters, which is a typical centralized architecture with problems such as single points of failure, privacy disclosure, and performance bottlenecks. The credibility of the global model depends on the parameter server and is subject to a centralized credit model. In traditional FL, multiple participants cooperate to train a global model under a trusted centralized parameter server, on the condition that their data never leave the local area; the server collects local model updates, performs aggregation, maintains global model updates, and carries out other centralized operations. The entire training process is therefore vulnerable to server failures. A malicious parameter server can even poison the model, generate inaccurate global updates, and then distort all local updates, causing errors throughout the collaborative training process. In addition, some studies have shown that unencrypted intermediate parameters can be used to infer important information in the training data, exposing the private data of participants. Therefore, during model training, it is particularly important to adopt appropriate encryption schemes for local model updates and to maintain the global model on distributed nodes. As a distributed shared ledger jointly maintained by multiple parties, the blockchain establishes trust relationships between participants without relying on the credit endorsement of a trusted third party, through the combined innovation of multiple technologies such as distributed ledger technology, cryptographic algorithms, peer-to-peer communication, consensus mechanisms, and smart contracts. It can be used to replace the parameter server in FL and to store relevant information from the model training process.

In peer-to-peer cooperative computing scenarios, traditional centralized FL suffers from low communication efficiency and slow, insecure, and untrustworthy aggregation. First, the aggregation node must consume a large amount of computing and communication resources; among peer entities with equal benefits, no entity is willing to take on the aggregation task and bear the redundant responsibility and resource consumption. Second, the aggregation process is exposed to malicious attacks: an aggregation node can maliciously reduce the aggregation weight of a cooperating party so that the global model deviates from its local model, achieving a targeted attack, or it can retain the correct model and distribute a tampered one, achieving a global attack. Finally, the global model trained by the aggregation node predicts poorly for any single agent and cannot be personalized. In practical applications, because of data heterogeneity and the distrust or absence of a central server, different federations often cannot work together.

To sum up, this paper proposes a decentralized asynchronous federated distillation learning algorithm. Through circular knowledge distillation, the personalized model of each federation is obtained without a central server; the trained model is then uploaded to the blockchain for other federations on the chain to download and train locally.

    Our contributions are as follows:

a) We propose an asynchronous federated distillation algorithm that integrates blockchain and FL, which can accumulate public information from different federations without violating privacy and produce a personalized model for each federation through adaptive knowledge distillation.

    b) Asynchronous consensus is combined with FL to improve the efficiency of model uplink.

c) A comparison of the FedAvg algorithm, the BFLC [2] algorithm, and the proposed DHFDL algorithm shows that DHFDL, which aggregates models on the chain asynchronously through asynchronous consensus, takes the shortest training time and achieves the highest efficiency.

    2 Related Work

    2.1 Federated Learning

FL [3] was launched by Google in 2016 to solve the problems of data privacy and data islands in AI. Its essence is that a central server pushes the global model to the multiple data parties participating in FL, and the model is trained at each data party. Each data party transmits its local training updates to the central server, which aggregates these updates to generate a new global model and pushes it back to the data parties. The architecture of FL is shown in Fig. 1.

    Figure 1:General FL architecture

To make full use of the data of different independent clients while protecting data privacy and security, Google proposed the first FL algorithm, FedAvg, to aggregate client information. FedAvg trains a machine learning model by aggregating updates from distributed mobile phones, exchanging model parameters rather than the data itself. FedAvg can solve the data island problem in many applications. However, plain FedAvg cannot meet complex real-world scenarios: under statistical data heterogeneity it may converge slowly and incur high communication costs. In addition, because only a shared global model is obtained, the model may degrade when making predictions for a personalized client. Reference [4] combined three traditional adaptive techniques into the federated model: fine-tuning, multi-task learning, and knowledge distillation. Reference [5] attempted to deal with feature shift between clients by retaining local batch normalization parameters, which can represent client-specific data distributions. Reference [6] proposed introducing knowledge distillation into FL so that FL can achieve better results when the local data distributions are not Independent and Identically Distributed (Non-IID). Reference [7] evaluated FL's model accuracy and stability on Non-IID datasets. Reference [8] proposed an open research library that allows researchers to compare the performance of FL algorithms fairly; the library also promotes research on various FL algorithms through a flexible and general Application Programming Interface (API) design. Reference [9] proposed a sustainable user incentive mechanism for FL that dynamically distributes a given budget among the data owners in the federation, accounting for both the revenue received and the waiting time before receiving it, so as to maximize the collective utility and minimize the inequality between data owners. Reference [10] proposed a new problem called federated unsupervised representation learning, which uses unlabeled data distributed across different data parties: unsupervised methods learn from the data on each node while protecting user data privacy. A new method based on dictionaries and alignment was also proposed to realize unsupervised representation learning.
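FedAvg's aggregation rule is a sample-count-weighted average of the client updates. A minimal sketch (the helper and its flat-list parameter representation are our illustration, not Google's implementation):

```python
def fedavg(local_updates, sample_counts):
    """Aggregate client parameter vectors into a global model.

    local_updates: one flat list of parameters per client.
    sample_counts: number of local training samples per client,
    used as the aggregation weight.
    """
    total = sum(sample_counts)
    n_params = len(local_updates[0])
    return [
        sum(update[i] * n / total for update, n in zip(local_updates, sample_counts))
        for i in range(n_params)
    ]
```

Weighting by sample count is what distinguishes FedAvg from a plain mean: clients with more data pull the global model further toward their local optimum.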

The purpose of FL is to train deep learning models while ensuring user privacy. However, Reference [11] proved that the gradients transmitted in model updates can disclose data, so general FL still carries a risk of data privacy disclosure, and research on FL security is a valuable direction. Some research results on FL security are summarized below. Reference [12] proposed introducing a differential privacy algorithm into FL to construct false datasets whose distribution is similar to the real datasets, improving the security of real data privacy. Reference [13] proposed applying secure multi-party computation (SMC) and differential privacy simultaneously and balancing the two, so that FL achieves better inference performance while retaining the security brought by differential privacy. Reference [14] proposed an algorithm combining secret sharing and Top-K gradient selection, which balances user privacy protection against communication overhead: it reduces communication overhead while ensuring user privacy and data security, and improves model training efficiency.

    2.2 Knowledge Distillation

Knowledge distillation is a technique that extracts valuable insights from complex models and condenses them into a single, streamlined model, enabling deployment in real-world applications. Knowledge distillation [15] is a knowledge transfer and model compression algorithm proposed by Geoffrey Hinton et al. in 2015. For a specific task, a knowledge distillation algorithm transfers the information of a well-trained teacher network, which contains more knowledge, to a smaller, untrained student network.

In this paper, the loss function L_student of the student network can be defined as:

L_student = L_CE(p_student, y) + T² · L_KL(p_teacher ‖ p_student), with p = softmax(z / T)

L_CE is the cross-entropy loss function, L_KL is the Kullback-Leibler (KL) divergence, p_student and p_teacher are the outputs of the networks after the softmax activation function, z is the output logits of the neural network, and T is the temperature, which is generally set to 1. The primary purpose of the temperature is to reduce the loss of knowledge contained in small-probability results caused by excessive probability differences. KL divergence measures the difference between two models: the larger the KL divergence, the more significant the distribution difference between the models, and the smaller the KL divergence, the smaller the difference. The formula of KL divergence is:

D_KL(P ‖ Q) = Σ_x P(x) · log( P(x) / Q(x) )

where P(x) and Q(x) respectively represent the outputs of the different networks after the softmax activation function.
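The distillation loss above can be computed directly from logits. A minimal sketch in plain Python (the weighting factor alpha and the helper names are our assumptions for illustration; the paper does not give code):

```python
import math

def softmax(logits, T=1.0):
    """Temperature-scaled softmax: higher T flattens the distribution."""
    exps = [math.exp(z / T) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def kl_divergence(p, q):
    """D_KL(P || Q) = sum_x P(x) * log(P(x) / Q(x))."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def distillation_loss(student_logits, teacher_logits, true_label, alpha=0.5, T=1.0):
    """Weighted sum of hard-label cross entropy and teacher-student KL divergence.

    The T**2 factor compensates for the 1/T**2 gradient scaling that
    temperature introduces, as in Hinton et al. (2015).
    """
    p_student = softmax(student_logits, T)
    p_teacher = softmax(teacher_logits, T)
    ce = -math.log(softmax(student_logits)[true_label])  # hard-label cross entropy
    kl = kl_divergence(p_teacher, p_student)
    return alpha * ce + (1 - alpha) * T * T * kl
```

When the teacher and student distributions coincide, the KL term vanishes and only the cross-entropy term remains.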

    2.3 Federated Learning Based on Blockchain

Reference [16] proposed a trusted sharing mechanism that combines blockchain and FL to achieve data sharing, protecting private data and ensuring trust in the sharing process. Reference [2] proposed BFLC, a blockchain-based FL framework using committee consensus. The framework uses the blockchain to store global and local models to secure the FL process and uses a special committee consensus to reduce malicious attacks. Reference [17] designed a blockchain-based FL architecture that includes multiple miners, using the blockchain to coordinate FL tasks and store global models. The process is as follows: nodes download the global model from their associated miner, train it, and upload the trained local model as a transaction to the associated miner. The miner confirms the validity of the uploaded transaction, verifies the accuracy of the model, and stores the confirmed transaction in its candidate block. Once the candidate block has collected enough transactions or has waited long enough, all miners enter the consensus stage together, and the Proof-of-Work (PoW) winner publishes its candidate block on the blockchain. In addition, miners can allocate rewards when they publish blocks, encouraging devices to participate in FL. The recently emerged directed-acyclic-graph-based FL framework [18] builds an asynchronous FL system on the asynchronous bookkeeping of directed acyclic graphs to solve the device asynchrony problem in FL.

Reference [19] proposed using blockchain to enhance the verifiability and auditability of the FL training process, but chaining models synchronously through committee validation is less efficient. Reference [20] proposed data sharing based on blockchain and zero-knowledge proofs, but it is not suitable for computing over and sharing complex models. Reference [21] proposed a verifiable query layer for guaranteeing data trustworthiness; a multi-node model verification mechanism of this kind is well suited to FL.

    3 Method

    3.1 Problem Analysis

Current decentralized FL solutions have been proven to work well in ideal situations. In real scenarios, however, problems such as model training speed and FL security still pose huge challenges to existing decentralized FL algorithms. Regarding training speed, synchronous model aggregation slows down model updating when the participants' equipment performance differs. Regarding security, decentralized FL faces not only data poisoning by malicious nodes but also information tampering: malicious nodes undermine the security of FL by tampering with the communicated model parameters or gradients. Different nodes have different FL computing and communication resources. In conventional centralized synchronous FL systems, a node must wait for the other nodes to finish their training tasks before all can enter the next round together; if a node goes offline during training, it may invalidate an entire round of FL.

Blockchain is a decentralized asynchronous data storage system. All transactions verified on the blockchain are permanently stored in blocks and cannot be tampered with. In addition, the blockchain uses a consensus algorithm to verify transactions, which effectively prevents malicious nodes from tampering with transaction information. In decentralized FL, blockchain asynchronous consensus can be used to accelerate model aggregation and thereby improve model training speed. Therefore, this paper introduces blockchain technology into a decentralized FL framework so that network communication load, FL security, and other indicators reach better standards.

    3.2 Architecture Design

Given the absence of a central server among the different federations, the key is to enable them to share knowledge without the involvement of other administrators and without directly exchanging data. The objective of the blockchain-based FL architecture is to accumulate public knowledge through knowledge distillation while preserving data privacy and security and storing personalized information. As shown in Fig. 2, the decentralized heterogeneous federated distillation learning architecture is divided into the bottom algorithm application layer, the blockchain layer for trustworthy model broadcasting and endorsement, and the asynchronous federated distillation learning part for model training.

On the blockchain, we design two different types of blocks to store public models and local models. FL training relies only on the latest model blocks; historical blocks are stored for fault fallback and block validation. The data storage structure on the blockchain is shown in Fig. 3.

The public model block is created in the common knowledge accumulation phase. In this phase, nodes train on local data and then access the blockchain to obtain the latest public model. The public model acts as a teacher that enhances the local model through knowledge distillation, and the local model that completes knowledge distillation is chained as a new public model block. When the public model blocks with the same TeacherID accumulate to a certain number, the model aggregation smart contract is triggered to generate a new public model block. The public model block includes the block header, TeacherID, the ID of this model, the model evaluation score, IsPublic, IsAggregation, and the model parameters.

    Figure 3:Data storage structure on blockchain

Local model blocks are created in the personalization phase. When the accuracy of the public model reaches a threshold, the subsequently participating nodes trigger the personalization training smart contract and enter the personalization training phase. The public model obtained in the previous phase is used as a pre-trained model input to improve each node's local model, and no new public model blocks are generated. The participating nodes in this phase first perform local model training, then download the public model and use it as a teacher to fine-tune the local model by distillation learning. The fine-tuned local model is chained as a new personalized model block. Nodes download each new personalized model block, verify the personalized models uploaded by other nodes, and try to use them to fine-tune the local model; the fine-tuned personalized model is also uploaded to the blockchain for knowledge sharing. The personalized model block includes the block header, TeacherID, the ID of this model, IsPublic, the UserID of the user to which the model belongs, the model evaluation score, and the model parameters.
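The two block layouts described above can be sketched as data structures. The field names below are our illustrative rendering of the fields listed in the text, not the paper's actual on-chain encoding:

```python
from dataclasses import dataclass

@dataclass
class PublicModelBlock:
    """Block created in the common knowledge accumulation phase."""
    header: dict          # standard block header (hash links, timestamp, ...)
    teacher_id: str       # TeacherID of the on-chain model used for distillation
    model_id: str         # ID of this model
    score: float          # model evaluation score
    is_public: bool = True
    is_aggregation: bool = False  # set by the model-aggregation smart contract
    params: bytes = b""   # serialized model parameters

@dataclass
class PersonalizedModelBlock:
    """Block created in the personalization phase."""
    header: dict
    teacher_id: str
    model_id: str
    user_id: str          # UserID of the user to which the model belongs
    score: float
    is_public: bool = False
    params: bytes = b""
```

The shared TeacherID field is what lets the aggregation smart contract count how many public blocks descend from the same teacher before triggering a new aggregated block.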

    3.3 Model Asynchronous Uplink Mechanism Based on Honeybadger Consensus

The model uplink consensus mechanism based on HoneyBadger consensus is shown in Fig. 4:

Here, C_n denotes a participant in the FL training, that is, the load forecasting business of a virtual power plant; LM_n is the local model each participant trains on its local data, i.e., personalized training. The SHn module is used for the trusted broadcast of each local model, and the endorsen module is used for model endorsement. When a local model collects a certain number of endorsements, it is chained through the smart contract.

    3.3.1 SHn Module

As an anonymous transmission channel based on secret sharing, the SHn module obfuscates the uploader's address, preventing malicious nodes from reproducing the data or launching targeted attacks against the model owner by learning the model's source in advance.

After construction, the SHn module is used for the on-chain update of the FL local update block. The local update block serves as a black-box input; the on-chain verification nodes cannot access or modify the information inside the block before the anonymous transmission is complete.

The SHn module satisfies the following properties:

(Validity) If an honest node outputs an integrity verification set V for the received locally updated model, then |V| ≥ N − f and V contains the verifications of at least N − 2f honest nodes.

(Consensus) If one honest node outputs an integrity verification set V, then the other nodes also output V.

(Integrity) If N − f correct nodes receive input, then all nodes generate an output.

N denotes the number of participating nodes, and f denotes the number of malicious nodes.

The anonymous transmission channel SHn is implemented with secret sharing. First, a node generates a public key and N private keys SK_i according to the node IDs. The public key is then used to encrypt the model, and the encrypted model and public key are distributed to the other nodes. Multiple nodes must cooperate to decrypt the encrypted model: once f + 1 honest nodes decrypt the ciphertext, the encrypted model is restored to a usable model. Unless an honest node leaks the model after decryption, an attacker cannot decrypt the model ciphertext. The SHn process is as follows:

SH.setup() -> PK, {SK_i}: generates the public key PK for encrypting the local model update and a set of private keys SK_i for decrypting the encrypted model.

SH.Enc(PK, m) -> C: encrypts the local model update m using the public key and produces the encrypted model C.

SH.DecShare(SK_i, C): distributes the encrypted model and key shares to each node.

SH.Dec(PK, C, {i, SK_i}) -> m: aggregates {i, SK_i} from at least f + 1 nodes to obtain the private key SK corresponding to PK, and uses SK to recover a usable local update model at each node.
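The f + 1 threshold recovery that SH.Dec relies on can be illustrated with Shamir secret sharing over a prime field. This is a minimal sketch under our own assumptions: the function names mirror the SH interface above, but the implementation (and the toy XOR keystream standing in for a real cipher) is ours, not the paper's:

```python
import random

PRIME = 2**61 - 1  # prime field for the Shamir shares

def sh_setup(n, f, seed=0):
    """Generate a symmetric key and split it into n Shamir shares.

    The key is the constant term of a random degree-f polynomial,
    so any f + 1 shares recover it and f or fewer reveal nothing.
    """
    rng = random.Random(seed)
    key = rng.randrange(1, PRIME)
    coeffs = [key] + [rng.randrange(PRIME) for _ in range(f)]
    shares = [
        (x, sum(c * pow(x, k, PRIME) for k, c in enumerate(coeffs)) % PRIME)
        for x in range(1, n + 1)
    ]
    return key, shares

def sh_enc(key, model_bytes):
    """XOR the serialized model with a key-derived keystream (toy cipher only)."""
    stream = random.Random(key)
    return bytes(b ^ stream.randrange(256) for b in model_bytes)

def sh_dec(shares, ciphertext):
    """Lagrange-interpolate f + 1 shares at x = 0 to recover the key, then decrypt."""
    key = 0
    for xi, yi in shares:
        num, den = 1, 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        key = (key + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return sh_enc(key, ciphertext)  # the XOR keystream cipher is its own inverse
```

With N = 5 nodes and f = 2 tolerated malicious nodes, any 3 honest nodes can jointly restore the model, matching the f + 1 decryption condition in the text.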

    3.3.2 Endorsen Module

The endorsen module is used to verify FL local model updates. All nodes verify the model delivered by SHn and cast a verification vote. N concurrent instances of binary Byzantine agreement produce the endorsement bit vector, where b = 1 indicates that the node agrees to chain the model.

The endorsen module satisfies the following properties:

(Consensus) If any honest node outputs agreement endorsement b for a model, then every honest node outputs agreement endorsement b.

(Termination) If all honest nodes receive the input model, then every honest node outputs a 1-bit value indicating whether it agrees to endorse the model.

(Validity) If any honest node outputs b, then at least one honest node received b as input.

The validity property implies consistency: if all correct nodes receive the same input value b, then b must be the decided value. On the other hand, if two nodes receive different inputs at any point, the adversary may force the decision of one of the values before the remaining nodes receive their input.
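The chaining decision itself reduces to a threshold test on the endorsement votes. A toy sketch (the N − f threshold is our reading of the "certain number of endorsements" in Section 3.3; the paper gives no code):

```python
def endorse(votes, n, f):
    """Decide whether to chain a model from binary endorsement votes.

    votes: list of 0/1 ballots (b = 1 means 'agree to chain').
    Requiring at least n - f agreements means f Byzantine voters
    can neither forge acceptance nor be needed for it.
    """
    assert len(votes) <= n
    return sum(votes) >= n - f
```

With n = 4 and f = 1, three agreement votes suffice while two do not.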

    3.4 Asynchronous Federated Distillation Learning

The training is divided into a common knowledge accumulation stage and a personalization stage. In the common knowledge accumulation stage, each federation on the blockchain is regarded as a meta-distribution, and the knowledge of each federation is aggregated cyclically. After knowledge accumulation is completed, the model is uploaded to the blockchain so that other federations on the blockchain can perform personalized training. The common knowledge accumulation stage lasts for several rounds to ensure that the public knowledge of each federation is fully extracted. In the personalization stage, a federation on the blockchain downloads the trained model from the chain to guide local training, and this stage can be trained in the same way. Since the public knowledge has already been accumulated, local training is optional before the public knowledge model is sent to the next federation. Both stages are based on knowledge distillation.

    In the first stage,we divide the specific steps into four steps.In short,the four steps are operated as follows:

a) Train: use the local dataset to train the local model as the student model.

b) Download: download the on-chain model for distillation.

c) Distill: use knowledge distillation to enhance the local model, obtaining the pre-on-chain model.

d) Upload: upload the pre-on-chain model to the blockchain.

    The detailed procedures are as follows:

1. Train. In this step, each client updates its model with its local dataset. With learning rate η₁, the model parameters are updated by gradient descent:

w_k ← w_k − η₁ · ∇ L_CE(w_k)

2. Download. In this step, each client downloads an on-chain model w_g for distillation.

3. Distill.

Based on the model learned in the previous step, each client predicts local logits, which serve as soft labels for the data samples in its local dataset. More specifically, given the downloaded model parameters w_g, each client updates its model by distillation:

w_k ← w_k − η₂ · ∇ L_KL(p_teacher ‖ p_student)

where η₂ is the learning rate in the proposed distillation procedure.

4. Upload. In this step, each client uploads the pre-on-chain model to the blockchain.
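The four steps above can be sketched as one round over a stand-in chain. The toy below shrinks the "model" to a single float so the train/download/distill/upload flow is visible; all classes and the update rules are our illustration, not the paper's implementation:

```python
class ToyClient:
    """Toy client whose 'model' is one float pulled toward a local target."""
    def __init__(self, local_target, lr=0.5):
        self.model, self.target, self.lr = 0.0, local_target, lr

    def train(self):
        # Step 1 (Train): a gradient step toward the local data
        self.model += self.lr * (self.target - self.model)

    def distill(self, teacher_model, weight=0.5):
        # Step 3 (Distill): pull the student toward the downloaded teacher
        self.model += weight * (teacher_model - self.model)

class ToyChain:
    """Stand-in for the blockchain: stores the latest public model."""
    def __init__(self):
        self.public_model = 0.0
    def download(self):              # Step 2 (Download)
        return self.public_model
    def upload(self, model):         # Step 4 (Upload)
        self.public_model = model

def round_of_training(client, chain):
    """One round of the common knowledge accumulation stage."""
    client.train()
    teacher = chain.download()
    client.distill(teacher)
    chain.upload(client.model)
```

Running clients cyclically through `round_of_training` mixes each federation's local knowledge into the public model, which is the circular distillation idea of the first stage.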

The second stage is the personalized training stage. Since the entire model has no central server, the personalized model must be obtained in the same order as in the common knowledge accumulation stage. The first stage yields the public model f, which contains enough common knowledge. To prevent the common knowledge from being lost, the public model f is transferred to the next federation before local personalized training. Other federations on the blockchain can download the trained models of other nodes for local training. Since public knowledge has been accumulated, local training is optional. The process of the second stage is shown in Fig. 5. When the public model performs poorly on the local validation data, the personalization phase modifies it very little; when its performance on the local validation data is acceptable, the personalization phase modifies it substantially, mostly to obtain better performance.

    4 Experiment

Based on the algorithm proposed in this paper, load forecasting and analysis simulation experiments on the demand side of the VPP show that the forecasting model can accurately predict demand-side load and support the VPP in achieving precise layered and partitioned regulation and control. The models are written in Python 3.9.10 and PyTorch 1.11.0 and executed on a GeForce RTX 3080Ti GPU.

In the load forecasting experiment, the dataset contains three types of enterprises: real estate, manufacturing, and catering. Each industry includes sample data from 100 companies over 36 consecutive months. The features of each sample are enterprise water consumption, enterprise gas consumption, daily maximum temperature, daily minimum temperature, daily average temperature, daily rainfall, and humidity; the label is the enterprise's energy usage.

Based on the proposed DHFDL, federated load forecasting models are constructed and trained on the data of the three industries. Fig. 6 compares the predictions with the actual values; the figure shows that the proposed algorithm achieves a good prediction effect.

To prove the effectiveness of blockchain-based decentralized heterogeneous federated distillation learning, this paper conducts experiments on the real-world federated dataset FEMNIST. The dataset contains 805,263 samples from 3,550 users of a handwritten character image classification task, covering 62 categories (10 digits, 26 lowercase letters, and 26 uppercase letters); the local datasets are unbalanced in size and their distributions are not independent and identically distributed. After randomly selecting active nodes, we perform local training and aggregation in memory.

Malicious blockchain nodes participating in FL training generate harmful local models for malicious attacks; if these participate in model aggregation, the performance of the global model is significantly reduced. In this section, we simulate malicious node attacks and set different malicious node ratios to demonstrate their impact on the performance of FedAvg, BFLC, and the proposed DHFDL model. This paper assumes that the malicious attack randomly perturbs the local training model to produce an unusable model. FedAvg performs no defense and aggregates all local model updates. BFLC relies on committee consensus to resist malicious attacks: during training, each model update receives a score from the committee, and models with higher scores are selected for aggregation. In the experiment, we assume the malicious nodes collude, that is, malicious committee members give random high scores to malicious updates; the nodes whose model evaluation scores are in the top 20% of each training round form the committee for the next round. The participating nodes of DHFDL train the local model and, in each round of updates, select an on-chain model with high accuracy for knowledge distillation to improve the effectiveness of the local model. As shown in Fig. 7, DHFDL can resist a higher proportion of malicious nodes than the comparative methods, demonstrating the effectiveness of DHFDL with the help of knowledge distillation.
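The attack and the score-based defense described above can be sketched as two small helpers. Both are our illustrative approximations of the experimental setup, not the paper's code:

```python
import random

def perturb(update, scale=10.0, seed=0):
    """Attack model assumed in the experiment: add random noise to a local update,
    producing an unusable model."""
    rng = random.Random(seed)
    return [w + rng.uniform(-scale, scale) for w in update]

def top_score_filter(updates, scores, keep_frac=0.2):
    """Committee-style defense: keep only the top-scoring fraction of updates
    for aggregation, discarding the rest."""
    k = max(1, int(len(updates) * keep_frac))
    ranked = sorted(zip(scores, updates), key=lambda pair: pair[0], reverse=True)
    return [u for _, u in ranked[:k]]
```

The filter's weakness under collusion is visible here: if malicious committee members assign high scores to perturbed updates, those updates pass the cut regardless of their true quality.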

    Figure 7:Performance of algorithms under malicious attacks
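The attack and the distillation-based defense described above can be illustrated with a small sketch. The logit shapes, the perturbation scale, and the temperature are illustrative assumptions, not the experiment's actual settings: a random-perturbation attack makes a model's output distribution diverge, so a node distilling from the highest-accuracy on-chain model incurs a much smaller distillation loss than one distilling from a perturbed model.

```python
import numpy as np

def softmax(z, t=1.0):
    """Temperature-scaled, numerically stable softmax."""
    z = z / t
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def malicious_update(weights, rng, scale=5.0):
    """Simulated attack: random perturbation that makes the model unusable."""
    return weights + rng.normal(scale=scale, size=weights.shape)

def distillation_loss(student_logits, teacher_logits, t=2.0):
    """KL divergence between softened teacher and student distributions,
    the standard knowledge-distillation objective."""
    p = softmax(teacher_logits, t)
    q = softmax(student_logits, t)
    return float(np.mean(np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12)), axis=-1)))

rng = np.random.default_rng(1)
honest = rng.normal(size=(8, 62))         # logits of an honest on-chain model (62 classes)
attacked = malicious_update(honest, rng)  # same model after random perturbation
student = honest + rng.normal(scale=0.1, size=honest.shape)

# Distilling from the honest (high-accuracy) teacher keeps the loss small;
# the perturbed teacher would pull the student far away.
loss_honest = distillation_loss(student, honest)
loss_attacked = distillation_loss(student, attacked)
print(loss_honest < loss_attacked)
```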

    By combining asynchronous consensus with the training process of the FL model,the training efficiency of decentralized FL can be improved.As shown in Fig.8,this paper varies the number of training nodes participating in FL and compares the per-round training time of FedAvg,BFLC,and DHFDL.The time required for each round of training of DHFDL,which realizes asynchronous on-chain model aggregation through asynchronous consensus,is the lowest.At the same time,as the number of participating nodes increases,the per-round training time of all three algorithms increases accordingly,but the time required by the DHFDL algorithm remains the lowest,which shows that DHFDL is efficient with the help of asynchronous model uploading to the chain.

    Figure 8:Performance of algorithms in different numbers of participating nodes
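The efficiency gain can be illustrated with a toy timing model; the training-time distribution and the aggregation window below are assumptions made purely for illustration. A synchronous round (as in committee-based consensus) must wait for the slowest node, while asynchronous on-chain aggregation folds stragglers into later rounds instead of blocking on them:

```python
import numpy as np

rng = np.random.default_rng(2)

def round_times(num_nodes):
    """Hypothetical per-node local training times (seconds) for one round."""
    return rng.gamma(shape=2.0, scale=1.5, size=num_nodes)

def synchronous_round(times):
    """A synchronous round waits for the slowest node before aggregating."""
    return times.max()

def asynchronous_round(times, window=2.0):
    """With asynchronous uplink, each model is aggregated on arrival; the
    effective round length is bounded by an aggregation window rather than
    by the straggler."""
    return min(times.max(), np.median(times) + window)

for n in (5, 10, 20):
    t = round_times(n)
    # The asynchronous round is never longer than the synchronous one,
    # and the gap grows as stragglers become more likely with more nodes.
    print(n, synchronous_round(t) >= asynchronous_round(t))
```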

    Fig.9 compares the on-chain storage cost of the algorithms.Ten training nodes are simulated to train a FEMNIST classification model based on a Convolutional Neural Network (CNN),and the on-chain storage cost of the BFLC and DHFDL algorithms is recorded.The DHFDL algorithm,which realizes on-chain model aggregation through asynchronous consensus,requires less on-chain storage overhead to achieve the same accuracy.Meanwhile,as accuracy improves,both algorithms require more on-chain storage space,but the DHFDL algorithm still has lower storage overhead than the BFLC algorithm.

    Figure 9:Storage performance of the algorithm
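A back-of-the-envelope sketch of why the storage costs differ, under the assumption that a BFLC-style chain stores every selected local update while DHFDL stores only one asynchronously aggregated model per round; the CNN parameter count, round count, and uploads per round are hypothetical:

```python
def onchain_storage_mb(num_params, rounds, uploads_per_round, bytes_per_param=4):
    """Rough on-chain storage when every upload writes the full float32
    parameter vector to the chain."""
    return num_params * bytes_per_param * rounds * uploads_per_round / 1e6

# Hypothetical small CNN for FEMNIST (~1.2M float32 parameters).
cnn_params = 1_200_000
# BFLC-style: all ten selected local updates stored per round;
# DHFDL-style: one asynchronously aggregated model per round.
bflc = onchain_storage_mb(cnn_params, rounds=50, uploads_per_round=10)
dhfdl = onchain_storage_mb(cnn_params, rounds=50, uploads_per_round=1)
print(round(bflc), round(dhfdl))  # 2400 240
```

Under these assumptions the per-round saving is simply the ratio of uploads per round, which is consistent with the qualitative gap shown in Fig.9.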

    5 Conclusion

    In this paper,we propose the DHFDL algorithm,Decentralized Heterogeneous Federated Distillation Learning,to effectively predict the load of virtual power plants for better grid scheduling.DHFDL needs no central server to organize the federation for training:the common model is extracted through distillation learning and uploaded to the blockchain,and federation nodes on the blockchain can download the trained models of other federation nodes to guide personalized training and obtain a better model.The introduction of blockchain technology allows indicators such as network communication load and FL security to reach better levels.By simulating malicious node attacks and comparing against the FedAvg and BFLC algorithms,the DHFDL algorithm proposed in this paper is shown to resist a higher proportion of malicious nodes.The comparative experimental results also show that combining asynchronous consensus with the FL model training process improves the training efficiency of decentralized FL.

    Acknowledgement:I would first like to thank Jiangsu Provincial Electric Power Corporation for providing the experimental environment and necessary equipment and conditions for this research.The strong support from the institution made the experiment possible.I would particularly like to acknowledge my team members,for their wonderful collaboration and patient support.Finally,I could not have completed this dissertation without the support of my friends,who provided stimulating discussions as well as happy distractions to rest my mind outside of my research.

    Funding Statement:This work was supported by the Research and application of Power Business Data Security and Trusted Collaborative Sharing Technology Based on Blockchain and Multi-Party Security Computing(J2022057).

    Author Contributions:Study conception and design:Hong Zhu;data collection:Lisha Gao;analysis and interpretation of results:Yitian Sha,Shuo Han;draft manuscript preparation:Nan Xiang,Yue Wu.All authors reviewed the results and approved the final version of the manuscript.

    Availability of Data and Materials:Data not available due to ethical restrictions.Due to the nature of this research,participants of this study did not agree for their data to be shared publicly,so supporting data is not available.

    Conflicts of Interest:The authors declare that they have no conflicts of interest to report regarding the present study.
