This study investigated ChatGPT’s Python code generation capabilities through a quasi-experiment and a case study, combining quantitative and qualitative methods respectively. The quantitative analysis compared ChatGPT-generated code with human-written solutions in terms of accuracy, quality, and readability, while the qualitative study interviewed participants with varying levels of programming experience about the usability of ChatGPT for code generation. The findings revealed significant differences in quality between AI-generated and human-written solutions, but overall similarities in accuracy and readability. Interviewees reported that ChatGPT showed potential for generating simple programs but struggled with complex problems and iterative development, though most participants were optimistic about its future capabilities. Future research could involve larger samples, additional programming languages, and more complex problems.