Graph contrastive learning based on augmentation strategies has recently demonstrated remarkable performance. Existing methods typically apply attribute and structural augmentations jointly to generate graph views, learning invariances in the data by contrasting sample pairs. However, this joint approach may violate the expectation that a graph remain semantically similar before and after augmentation: because attribute information propagates through the graph structure, structural and attribute augmentations can interfere with each other and distort the graph's semantics. To address this, we propose a decoupled augmentation framework for graph contrastive learning that eliminates the mutual interference between the two augmentation levels while still fully exploiting graph information. Specifically, our framework employs a separate encoder to learn data invariance at each augmentation level, and it additionally exploits the positive gains that arise between the levels. Experimental results on five public datasets show that the proposed method compares favorably with state-of-the-art approaches.
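As a minimal sketch of the decoupling idea (not the authors' implementation), the following PyTorch snippet assumes feature masking as the attribute-level augmentation, edge dropping as the structure-level one, a one-layer GCN encoder per level, and an NT-Xent contrast. The cross-level term and its weight are illustrative assumptions standing in for the "positive gains" between levels; all names and hyperparameters are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def mask_features(x, p=0.2):
    # Attribute-level augmentation: randomly zero out feature dimensions.
    mask = (torch.rand(x.size(1), device=x.device) > p).float()
    return x * mask

def drop_edges(adj, p=0.2):
    # Structure-level augmentation: randomly remove edges, kept symmetric.
    keep = (torch.rand_like(adj) > p).float()
    keep = torch.triu(keep, diagonal=1)
    return adj * (keep + keep.T)

class GCNEncoder(nn.Module):
    # One-layer GCN with symmetric normalization; one instance per level.
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        a = adj + torch.eye(adj.size(0), device=adj.device)  # self-loops
        d = a.sum(dim=1).clamp(min=1.0).pow(-0.5)
        a_norm = d.unsqueeze(1) * a * d.unsqueeze(0)         # D^-1/2 A D^-1/2
        return F.relu(self.lin(a_norm @ x))

def nt_xent(z1, z2, tau=0.5):
    # NT-Xent: each node's view in z1 should match the same node in z2.
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.T / tau
    targets = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, targets)

# Toy training step on a random graph (100 nodes, 32 features).
x = torch.randn(100, 32)
adj = (torch.rand(100, 100) > 0.9).float()
adj = torch.triu(adj, diagonal=1)
adj = adj + adj.T

enc_attr = GCNEncoder(32, 64)    # sees only attribute augmentations
enc_struct = GCNEncoder(32, 64)  # sees only structural augmentations

za1 = enc_attr(mask_features(x), adj)
za2 = enc_attr(mask_features(x), adj)
zs1 = enc_struct(x, drop_edges(adj))
zs2 = enc_struct(x, drop_edges(adj))

# Intra-level contrast per encoder plus a hypothetical cross-level term
# (the 0.5 weight is an illustrative choice, not taken from the paper).
loss = nt_xent(za1, za2) + nt_xent(zs1, zs2) + 0.5 * nt_xent(za1, zs1)
loss.backward()
```

Because each encoder only ever sees perturbations at its own level, an attribute augmentation can never be compounded by a simultaneous structural change within the same view pair, which is the interference the abstract describes; the cross-level loss term then couples the two representation spaces so the levels still benefit from each other.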