You can try temporarily lowering knob_min_available_space_ratio to below 5% to help the cluster recover from the current situation (and then change it back once you have freed up enough space on disk). But I think using non-uniform disk sizes might continue to cause issues like this.
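For example, a minimal sketch of overriding the knob in foundationdb.conf (the 0.03 value is just an illustration; pick whatever ratio lets data distribution resume, and remember to remove it afterwards):

```ini
# foundationdb.conf -- applies to all fdbserver processes in this section
[fdbserver]
# Example only: allow storage servers to keep accepting data until
# available space drops below ~3% instead of the default ~5%.
knob_min_available_space_ratio = 0.03
```

The same override can be passed on the fdbserver command line as --knob_min_available_space_ratio=0.03. Either way the affected processes need to be restarted for the knob to take effect.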
AFAIK, the same absolute amount of data is kept on each SS (irrespective of the size of the mount point each SS is on). There may have been a few tweaks to this behavior, as hinted here.