Data-driven methods for machinery fault diagnosis have been widely investigated and developed in recent years. In general, the performance of data-driven diagnosis models depends heavily on abundant high-quality data, which is rarely available to an individual machine owner in actual industrial settings. Multiple data owners are therefore motivated to jointly train an effective global model on their aggregated data. However, legal regulations on data privacy and potential conflicts of interest make it difficult to build a trusted and secure central database that aggregates the local datasets. To address this issue, a federated learning scheme with dynamic weighted averaging is proposed. During training, each participant updates a model with its local data; the model parameters are then uploaded to a server and aggregated, and the updated global model is distributed back to the federated learning participants. Because raw data from the participants is never exposed during training, data privacy is preserved. The proposed dynamic weighted averaging algorithm further accounts for the imbalance of the distributed data, which effectively reduces the influence of low-quality local data. Experiments show that the proposed scheme outperforms the conventional federated learning scheme and offers a promising direction for privacy-preserving data mining.
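The abstract describes one communication round as local training, parameter upload, weighted aggregation, and redistribution of the global model. The following is a minimal Python sketch of such a round; it assumes the dynamic weights are derived from a per-client quality score (e.g., local validation accuracy or data size), which is one plausible interpretation since the abstract does not state the exact weighting rule, and all function names and values here are illustrative rather than the authors' implementation.

```python
import numpy as np

def local_update(params, data):
    """Placeholder for a client's local training step.

    In practice this would run SGD on the client's private fault data;
    here the parameters are returned unchanged to keep the sketch short.
    """
    return {k: v.copy() for k, v in params.items()}

def dynamic_weighted_average(client_params, client_scores):
    """Aggregate client models with weights derived from per-client scores.

    client_scores stands in for whatever quality measure the dynamic
    weighting uses (hypothetical choice: local validation accuracy).
    """
    weights = np.asarray(client_scores, dtype=float)
    weights = weights / weights.sum()  # normalize so the weights sum to 1
    return {
        key: sum(w * p[key] for w, p in zip(weights, client_params))
        for key in client_params[0]
    }

def federated_round(global_params, client_datasets, client_scores):
    """One round: local training on each client, then weighted aggregation."""
    updated = [local_update(global_params, d) for d in client_datasets]
    return dynamic_weighted_average(updated, client_scores)

if __name__ == "__main__":
    # Toy example: 3 clients sharing a two-layer "model" of numpy arrays.
    global_params = {"w1": np.zeros((4, 8)), "w2": np.zeros((8, 3))}
    client_datasets = [None, None, None]    # stand-ins for private local data
    client_scores = [0.92, 0.85, 0.40]      # e.g., local validation accuracies
    global_params = federated_round(global_params, client_datasets, client_scores)
```

Down-weighting the third client in this toy round mirrors the abstract's claim that the dynamic weighting reduces the influence of low-quality local data, while the raw datasets never leave the clients.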