Texas Election Examiner Finds Evidence of Software Fraud on

An examination of the Election Systems & Software (ES&S) EVS voting system was conducted by Brian Mechler at the Texas Secretary of State Elections Division offices on August 21, 2020. The examined EVS version (as well as other EVS versions) has multiple issues with its prescribed hash verification procedures. Hash verification is the process used to ensure that the software and/or firmware of a voting system matches exactly what was certified by the EAC. A hash is the output of a cryptographic function run on a file or program executable. If a file or program is changed in any way, it will produce a different hash result. Hash verification is a critical component of acceptance testing, ensuring the proper delivery of voting systems. In Election Advisory No. 2019-23, jurisdictions are directed to perform a complete system validation, which includes the verification of hashes [17].
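The property described above can be demonstrated with the standard `sha256sum` utility. This is an illustrative sketch only; the file name is invented, and the certified ES&S process uses its own export data and trusted hash file.

```shell
# Create a sample "firmware" file and compute its hash
# (file name is hypothetical, for illustration only).
printf 'certified firmware image\n' > firmware.bin
sha256sum firmware.bin          # record this value as the trusted hash

# Any change, even a single appended byte, yields a completely
# different hash, so tampering or corruption is detectable.
printf 'x' >> firmware.bin
sha256sum firmware.bin          # no longer matches the trusted hash
```

Comparing the recorded hash against a freshly computed one is exactly the comparison the verification scripts are supposed to perform.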

It was disclosed during the concurrent EVS exam that ES&S personnel have been performing the hash verification process instead of their customers. Jurisdictions should always perform this process themselves. Having the vendor perform a required component of acceptance testing creates, at best, a conflict of interest. The Secretary of State Elections Division has taken action to work with ES&S and its Texas customers to better define their roles and responsibilities with respect to acceptance testing and hash verification.

The hash verification process involves the creation of two USB thumb drives: one containing the system export data of the system to be verified, and the other containing the verification scripts and the trusted hash file. A host separate from the EMS is booted using a live Ubuntu DVD, which allows the user to run the Linux OS from the DVD without altering the non-volatile memory of the host computer. The export and scripting media are then mounted, and a set of scripts is run to configure the user's environment, compute hashes of the system export data, and compare those hashes with the trusted hash file. While working through this process, I initially overlooked the instruction to add the trusted hash file to the scripting media. Despite the missing trusted hash file, the verification script erroneously reported that the exported hashes matched the trusted hashes.
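The failure mode described above can be reconstructed with a short sketch. The file names here are assumed for illustration (this is not ES&S's actual script): the export hashes exist, but the trusted hash file was never copied onto the scripting media, so `diff` writes an error to standard error and nothing to standard output.

```shell
# Hypothetical reconstruction of the flawed check (file names assumed).
# The export hashes exist, but trusted_hashes.txt was never created.
echo "abc123  EVS_export.bin" > export_hashes.txt

# diff reports "No such file or directory" on stderr and prints
# nothing to stdout, which is exactly what a successful match prints.
result=$(diff export_hashes.txt trusted_hashes.txt 2>/dev/null)

# Testing only stdout cannot distinguish "identical" from "never compared".
if [ -z "$result" ]; then
    echo "Hashes match"   # wrongly reported: no comparison ever ran
fi
```

Because a genuine match and a missing input both produce empty standard output, a script that inspects only that stream will report success in both cases.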

It would be easy for personnel performing hash verification to assume a good result when, in actuality, no hash comparisons were made. Within their scripts, ES&S should have performed explicit checks on the existence of the two files being compared, failing loudly if either does not exist. A common open-source application, diff, is used to compare the hash files. To determine whether they match, ES&S examines only the text that diff writes to the standard output stream; in doing so, they miss the error messages written to the standard error stream. In general, it is bad coding practice to condition a critical decision on the written output of a third-party application, because the developer would have to know every possible output (intended or otherwise) in order to craft a reliable conditional. A more robust way to check the result of the diff call would have been to query its exit status, whose meaning the diff manual clearly defines [18]: "0 if inputs are the same, 1 if different, 2 if trouble."

It is my opinion that this bug (in addition to the overall process) indicates that ES&S has not developed their hash verification process with sufficient care, quality assurance, and concern for usability.
When jurisdictions run their hash verification, they should carefully examine the media they create for correctness and carefully monitor the output of the verification scripts to make sure no error messages
are printed along with text claiming a successful result.

JRJ Comment: Examiner Mechler ran the test without the trusted hash file in place, and the system should have loudly set off alarms. Instead, the system reported that everything looked just fine. Obviously, this is no bug. This is a pre-programmed test result that says everything looks good even when it most certainly is not.


