Others rely on the percentage of test scripts that can be executed by QA automation.
Some teams evaluate the results of automation testing by comparing the time spent executing test cases manually against the time taken by QA automation, tracking the percentage of time saved.
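The time-saved calculation is straightforward; a minimal sketch (the function name and sample durations are illustrative, not from any particular tool):

```python
def time_saved_percent(manual_minutes: float, automated_minutes: float) -> float:
    """Percentage of execution time saved by automating a test case."""
    if manual_minutes <= 0:
        raise ValueError("manual execution time must be positive")
    return (manual_minutes - automated_minutes) / manual_minutes * 100

# e.g. a case that takes 40 minutes manually and 10 minutes automated
print(round(time_saved_percent(40, 10)))  # 75
```

Summing this over a whole regression suite gives a rough but communicable figure for reporting the value of automation.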
One more way to determine the effect of test automation is to look at code coverage. Identifying the code covered by QA automation, and by the combination of automated and manual testing, can be very helpful for finding dead code, inactive code, or infrequently exercised code.
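Combining coverage data from both kinds of testing can be sketched with simple set operations; the line numbers below are illustrative values, not output from a real coverage tool:

```python
# Line numbers of a module exercised by each kind of testing
# (illustrative values, not from a real coverage tool).
all_lines = set(range(1, 101))   # 100 executable lines in the module
automated = set(range(1, 61))    # lines hit by automated tests
manual = set(range(40, 81))      # lines hit by manual tests

combined = automated | manual
never_exercised = all_lines - combined  # candidates for dead or inactive code

print(f"automated coverage: {len(automated) / len(all_lines):.0%}")  # 60%
print(f"combined coverage:  {len(combined) / len(all_lines):.0%}")   # 80%
print(f"never exercised:    {len(never_exercised)} lines")           # 20 lines
```

In practice a tool such as coverage.py produces the per-line data; the set arithmetic above is just the merging step.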
Judging test automation effectiveness by code coverage alone, however, can be deceptive. The main problem with code coverage is that it says nothing about the accuracy of verification: a test can execute code without checking anything. Every automated test script has to verify proper functionality, not merely run the code.
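A small example of why coverage alone is misleading: both of the hypothetical tests below execute every line of the function under test, so a coverage report treats them identically, yet only one of them verifies anything.

```python
def add(a: int, b: int) -> int:
    return a + b

# Both tests execute every line of add(), so both yield 100% line
# coverage -- but only the second one actually verifies the result.
def weak_test():
    add(2, 2)              # runs the code, checks nothing

def strong_test():
    assert add(2, 2) == 4  # runs the code AND verifies behaviour

weak_test()
strong_test()
```

If `add` were broken, `weak_test` would still pass while the coverage figure stayed at 100%, which is exactly the gap the paragraph above describes.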
Tracking QA automation status and history is also desirable when several people are involved in the automated testing process, so that every team member can follow the testing flow and see which types of cases fail regularly.
It would be even better to have an automation service that displays real-time results with each case's name, description, status, and the software tester responsible for its execution. Being able to track this history is very useful.
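The record such a service keeps per run, and the "which cases fail regularly" query, can be sketched as follows; the field names, statuses, and sample data are assumptions for illustration, not a real service's schema:

```python
from dataclasses import dataclass
from collections import Counter

@dataclass
class TestResult:
    case_name: str
    description: str
    tester: str   # software tester responsible for the case
    status: str   # e.g. "passed" or "failed"

def regularly_failing(history: list[TestResult], threshold: int = 2) -> list[str]:
    """Case names that failed at least `threshold` times in the history."""
    failures = Counter(r.case_name for r in history if r.status == "failed")
    return [name for name, count in failures.items() if count >= threshold]

history = [
    TestResult("login", "valid credentials", "alice", "failed"),
    TestResult("login", "valid credentials", "alice", "failed"),
    TestResult("checkout", "empty cart", "bob", "passed"),
]
print(regularly_failing(history))  # ['login']
```

A real dashboard would persist these records and render them live, but the same query over the stored history is what lets the team spot chronically failing cases.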