Segmenting medical images is critical to facilitating both patient diagnoses and quantitative research. A major limiting factor is the lack of labeled data, as obtaining expert annotations for each new imaging dataset or task can be expensive, labor-intensive, and inconsistent among annotators. To address this, we present CUTS (Contrastive and Unsupervised Training for multi-granular medical image Segmentation), a fully unsupervised deep learning framework for medical image segmentation, designed to better utilize the vast majority of imaging data that are never labeled or annotated. CUTS leverages a novel two-stage approach. First, it produces an image-specific embedding map via an intra-image contrastive loss and a local patch reconstruction objective. Second, these embeddings are partitioned at dynamic levels of granularity that correspond to the data topology. Ultimately, CUTS yields a series of coarse-to-fine-grained segmentations that highlight image features at various scales. We apply CUTS to retinal fundus images and two types of brain MRI images to delineate structures and patterns at different scales, providing distinct information relevant to clinicians. When evaluated against predefined anatomical masks at a given granularity, CUTS improves on existing unsupervised methods by 10% to 200% in Dice coefficient and Hausdorff distance. Furthermore, CUTS performs on par with the latest Segment Anything Model, which was pre-trained in a supervised fashion on 11 million images and 1.1 billion masks. In summary, with CUTS we demonstrate that medical image segmentation can be effectively solved without relying on large, labeled datasets or vast computational resources.
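To make the two-stage idea concrete, the sketch below illustrates it with toy NumPy code: a patch-level InfoNCE-style contrastive loss paired with a reconstruction objective (stage one), followed by partitioning the per-pixel embeddings at several granularities (stage two). This is a conceptual sketch only, not the CUTS implementation: the paper uses a learned encoder rather than random embeddings, and its granularity-adaptive partitioning follows the data topology rather than the plain k-means used here for brevity. All function names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def intra_image_contrastive_loss(anchors, positives, temperature=0.1):
    """InfoNCE-style loss within one image: each anchor patch embedding
    should be close to its positive (e.g. a nearby patch) and far from
    the other patches. anchors, positives: (n, d) arrays."""
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature                   # (n, n) similarities
    logits -= logits.max(axis=1, keepdims=True)      # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))              # matched pairs on diagonal

def reconstruction_loss(decoded_patches, patches):
    """Local patch reconstruction objective: mean squared error between
    each patch and its reconstruction from the embedding."""
    return np.mean((decoded_patches - patches) ** 2)

def kmeans(X, k, n_iter=50, seed=0):
    """Tiny k-means, standing in for the granularity-adaptive
    partitioning stage. X: (n, d) pixel embeddings."""
    r = np.random.default_rng(seed)
    centers = X[r.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

# Toy 16x16 "embedding map" with an 8-dim embedding per pixel
# (in CUTS this would come from the trained encoder).
H = W = 16
emb = rng.normal(size=(H * W, 8))

# Stage two: coarse-to-fine segmentations by partitioning the same
# embeddings at increasing granularity.
segmentations = {k: kmeans(emb, k).reshape(H, W) for k in (2, 4, 8)}
```

Each entry of `segmentations` is one segmentation of the same image, with small `k` merging large structures and larger `k` resolving finer patterns, mirroring the coarse-to-fine outputs described above.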