Gaussian processes (GPs), known for their flexibility as non-parametric models, have been widely used in practice involving sensitive data (e.g., healthcare, finance) from multiple sources. Given the challenge of data isolation and the need for high-performance models, how to jointly develop privacy-preserving GPs for multiple parties has emerged as a crucial topic. In this paper, we propose a new privacy-preserving GP algorithm, namely PP-GP, which employs secret sharing (SS) techniques. Specifically, we introduce a new SS-based exponentiation operation (PP-Exp) through confusion correction and an SS-based matrix inversion operation (PP-MI) based on Cholesky decomposition. However, the advantages of the GP come with a great computational burden and space cost. To further enhance efficiency, we propose an efficient split learning framework for privacy-preserving GP, named Split-GP, which demonstrably improves performance on large-scale data. We leave the private data-related and SMPC-hostile computations (i.e., random features) on the data holders, and delegate the remaining SMPC-friendly computations (i.e., low-rank approximation, model construction, and prediction) to semi-honest servers. The resulting algorithm significantly reduces computational and communication costs compared to PP-GP, making it well-suited for large-scale datasets. We provide a theoretical analysis of the correctness and security of the proposed SS-based operations. Extensive experiments show that our methods achieve competitive performance and efficiency under the premise of preserving privacy.
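To make the Split-GP workflow described above concrete, the following is a minimal, plaintext Python sketch of the pipeline structure only: data holders compute random features on their private inputs locally, and a low-rank GP model is then built and queried from the pooled features via a Cholesky-based solve. All names, dimensions, and data here are illustrative assumptions, and the SMPC layer (secret sharing of the features and labels, and the SS-based PP-Exp/PP-MI operations) is deliberately omitted, so this is not the secure protocol itself.

```python
import numpy as np

def random_features(X, W, b):
    """Random Fourier features approximating an RBF kernel, computed locally by each data holder."""
    return np.sqrt(2.0 / W.shape[1]) * np.cos(X @ W + b)

rng = np.random.default_rng(0)
d, D = 5, 100                                   # input dimension, number of random features
W = rng.normal(size=(d, D))                     # shared feature map parameters (assumed public)
b = rng.uniform(0.0, 2 * np.pi, size=D)

# Two data holders map their private data to features on their own side.
X1, y1 = rng.normal(size=(40, d)), rng.normal(size=40)
X2, y2 = rng.normal(size=(60, d)), rng.normal(size=60)
Phi = np.vstack([random_features(X1, W, b), random_features(X2, W, b)])
y = np.concatenate([y1, y2])

# Server-side low-rank model construction: D x D system instead of N x N,
# solved with a Cholesky decomposition (the step PP-MI would perform on shares).
noise = 1e-2
A = Phi.T @ Phi + noise * np.eye(D)
L = np.linalg.cholesky(A)
alpha = np.linalg.solve(L.T, np.linalg.solve(L, Phi.T @ y))

# Prediction on new points.
X_test = rng.normal(size=(3, d))
mean = random_features(X_test, W, b) @ alpha    # predictive mean of the low-rank GP
print(mean)
```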
