Accurate cloud cover assessment is crucial in fields such as weather forecasting, climate science, agriculture, and energy system planning, as well as for forecasting precipitation patterns and aiding the early detection of extreme weather events. While weather stations provide valuable data on sky cloud coverage, their measurements are geographically localized and thus lack spatial coverage. Meteorological satellites, on the other hand, offer great potential to address this limitation by continuously scanning large areas in short periods of time. This work proposes a novel approach for predicting cloud cover in global satellite images by leveraging ordinal point labels from ground-based weather stations rather than relying on spatially resolved cloud masks, and demonstrates its effectiveness using a rank-loss-based convolutional neural network of the EfficientNet family. The model is trained in a transfer learning setting on a custom-collected dataset covering selected regions of the continental USA. Using station measurements only, we achieve an $F_{1}$-score of up to 0.6 and a ranked-within-1 accuracy ranging from 93.5 to 99.1. Supplementing the data with labels created by visual inspection, to correct for station-satellite mismatches, improves these scores to 0.75 and 98.4 to 100, respectively. The results imply significantly improved cloud cover assessment in regions without weather stations, extending the capability to monitor localized cloud patterns.
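As a minimal sketch of the ranked-within-1 accuracy metric mentioned above: a prediction counts as correct when the predicted ordinal cloud-cover class differs from the ground-truth class by at most one rank. The integer class encoding and the example values below are hypothetical illustrations, not taken from the paper.

```python
def ranked_within_1_accuracy(y_true, y_pred):
    """Fraction of predictions within one ordinal class of the ground truth."""
    assert len(y_true) == len(y_pred) and len(y_true) > 0
    hits = sum(abs(t - p) <= 1 for t, p in zip(y_true, y_pred))
    return hits / len(y_true)

# Hypothetical example: ordinal cloud-cover classes encoded as integers,
# e.g. 0 (clear sky) up to 8 (fully overcast).
y_true = [0, 3, 8, 5, 2]
y_pred = [1, 3, 6, 5, 4]
print(ranked_within_1_accuracy(y_true, y_pred))  # → 0.6
```

Such a tolerance-based metric suits ordinal labels because adjacent cloud-cover classes are perceptually close, so off-by-one predictions are far less severe than larger errors.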