The application of deep learning is becoming increasingly widespread across many fields, but the accuracy of deep learning models depends on large amounts of training data. Owing to data-security and regulatory restrictions, many fields cannot centralize their data for training, giving rise to "data silos". To address this, Google proposed federated learning, which enables a large number of clients to jointly train models with a trusted server while the data remain stored locally. Current research on federated learning focuses mainly on security and training efficiency. For the cross-silo federated learning scenario, this paper combines hierarchical federated learning with a privacy-protection mechanism based on secure multi-party computation and proposes MaskFL, a local multi-node masked federated learning algorithm based on secret sharing. MaskFL improves training efficiency while ensuring the security of federated learning. The main contributions are as follows. ① A local multi-node cross-silo federated learning framework is proposed. Each client uses its local computing resources to spawn multiple local nodes and allocates data to them according to a compute-power-based data partitioning method; the client then participates in global federated training on behalf of all of its local nodes, forming a three-level hierarchical federated learning structure. ② An adaptive mask encryption protocol based on secret sharing is proposed. Building on the local multi-node framework above, a reusable security-parameter mask is generated through secret sharing, and each local node adds this mask to its model during the uplink communication of the training process to protect the model parameters. Analysis under the stated security assumptions proves that the algorithm protects the privacy of clients' data. Experiments on common benchmark datasets show that the algorithm maintains relatively high accuracy while protecting privacy and significantly reduces the number of global communication rounds; compared with traditional federated learning, training efficiency improves by 30%, effectively accelerating the training of federated learning models.
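
Since the abstract describes the three-level framework only at a high level, the following minimal Python sketch illustrates one plausible reading of it: each client spawns several local nodes, partitions its data among them in proportion to assumed compute-power weights, aggregates its nodes' models (level 2), and the server averages the client models FedAvg-style (level 3). All function names (`partition_by_power`, `local_update`, etc.) and the toy update rule are illustrative assumptions, not the paper's actual API.

```python
import numpy as np

def partition_by_power(data, compute_power):
    """Split a client's samples across its local nodes in proportion
    to each node's (assumed) relative compute power."""
    shares = np.array(compute_power, dtype=float) / sum(compute_power)
    cuts = (np.cumsum(shares)[:-1] * len(data)).astype(int)
    return np.split(data, cuts)

def local_update(weights, node_data, lr=0.1):
    """Toy stand-in for one local node's training step."""
    grad = weights - node_data.mean(axis=0)
    return weights - lr * grad

def client_round(global_w, client_data, compute_power):
    """Level 2: a client trains its local nodes and aggregates them."""
    node_models = [local_update(global_w.copy(), d)
                   for d in partition_by_power(client_data, compute_power)]
    return np.mean(node_models, axis=0)

def server_round(global_w, clients):
    """Level 3: the server averages client models (FedAvg-style)."""
    return np.mean([client_round(global_w, data, power)
                    for data, power in clients], axis=0)

# Two clients, each spawning local nodes weighted by compute power.
rng = np.random.default_rng(0)
clients = [(rng.standard_normal((120, 5)), [2, 1, 1]),
           (rng.standard_normal((80, 5)), [3, 1])]
w = np.zeros(5)
for _ in range(5):  # global communication rounds
    w = server_round(w, clients)
```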
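
The adaptive mask itself is likewise only outlined; a common way to realize a cancelling additive mask from pairwise shared seeds, in the spirit of secure-aggregation protocols, is sketched below. The `shared_seeds` dictionary stands in for seeds that the paper would derive via secret sharing; the whole construction is an assumption about the mechanism, not the paper's exact protocol. Because the two endpoints of each pair use opposite signs, the masks cancel when the aggregator sums the uploads, so only the aggregate of the raw updates is revealed.

```python
import numpy as np

def make_mask(node_id, peer_ids, shared_seeds, shape):
    """Build one node's additive mask from pairwise shared seeds.

    shared_seeds[(i, j)] stands in for a seed that nodes i and j
    both derive via secret sharing (hypothetical); opposite signs
    at the two endpoints make the masks cancel under summation.
    """
    mask = np.zeros(shape)
    for peer in peer_ids:
        if peer == node_id:
            continue
        seed = shared_seeds[(min(node_id, peer), max(node_id, peer))]
        noise = np.random.default_rng(seed).standard_normal(shape)
        mask += noise if node_id < peer else -noise
    return mask

# Three local nodes upload masked updates; the sum reveals only the total.
nodes, shape = [0, 1, 2], (4,)
seeds = {(0, 1): 11, (0, 2): 22, (1, 2): 33}
updates = {i: np.random.default_rng(100 + i).standard_normal(shape)
           for i in nodes}
masked = {i: updates[i] + make_mask(i, nodes, seeds, shape) for i in nodes}
assert np.allclose(sum(masked.values()), sum(updates.values()))
```

Because the seeds are reusable, a sketch like this also suggests how the paper's "reusable security-parameter mask" could avoid re-running the secret-sharing setup every round, which is consistent with the reported reduction in global communication.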