This study investigates human perception of action similarities: do humans perceive patterns or clusters of similar actions? Moreover, if such clusters exist, which salient visual features of the actions do humans rely upon? Insights into these questions can help devise computational models that create visual primitives for human motion segmentation and understanding. Such models would be advantageous in human-event understanding or human-robot interaction settings, because the model would find the same action regularities salient as a human would. To that end, we study how humans judge similarities between different familiar hand-based human actions. Nineteen common kitchen-based hand actions (e.g., cutting bread, washing a dish) are chosen as stimuli. Participants performed two psychophysical experiments: an action similarity judgment task (Experiment 1) and an action discrimination task (Experiment 2). The human judgment data are analyzed for similarity patterns. Additionally, similarity patterns from three visual computing algorithms for motion understanding (based on low-level spatial and velocity features) are compared against the human judgment patterns, revealing some overlap. We discuss the similarity patterns as a way to model action perception that builds on action primitives.