The present study aims to develop an automated scoring program for assessing Korean students’ English speaking ability. Building on the prototype automated speaking scoring program developed in 2012, this study pursued three goals. First, to improve the performance of the prototype by raising the recognition rate of the speech recognition system embedded in the program. Second, to improve the scoring algorithm by refining the existing pool of scoring features. Third, to validate the performance of the modified program in order to explore the possibility of applying it in the classroom. To this end, two algorithms, Maximum Entropy (ME) and Multiple Regression (MR), were used to apply the scoring features, and the performances of the resulting scoring models were analyzed. The results showed that MR was slightly more efficient and reliable than ME. The automated scoring program still has a long way to go, but it certainly has a place in speaking assessment, especially given its potential as not only an assessment tool but also a learning tool for students.
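To make the Multiple Regression approach concrete, the sketch below shows one common way an MR scoring model can map scoring features to a human-assigned score. The feature names, data values, and score scale are hypothetical illustrations, not taken from the study.

```python
# Minimal sketch (hypothetical features and data): fitting a Multiple
# Regression (MR) scoring model that maps scoring features extracted from
# recognized speech to a human-assigned speaking score.
import numpy as np

# Each row holds illustrative features for one spoken response:
# [speech_rate, pronunciation_score, vocabulary_diversity]
features = np.array([
    [2.1, 0.80, 0.55],
    [3.0, 0.92, 0.70],
    [1.5, 0.60, 0.40],
    [2.7, 0.85, 0.65],
    [1.9, 0.70, 0.50],
])
human_scores = np.array([3.0, 4.5, 2.0, 4.0, 2.5])  # rater scores, 1-5 scale

# Add an intercept column and estimate coefficients by ordinary least squares.
X = np.hstack([np.ones((features.shape[0], 1)), features])
coeffs, *_ = np.linalg.lstsq(X, human_scores, rcond=None)

# Predict a machine score for a new response's feature vector (with intercept).
new_response = np.array([1.0, 2.4, 0.78, 0.60])
predicted = float(new_response @ coeffs)
```

In this setup, model performance would be evaluated by comparing predicted scores against held-out human ratings, which is the kind of comparison that would distinguish the efficiency and reliability of MR and ME models.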