I'm looking for someone who already has a script like this, or who can create a Perl script that does the following.
I will have two files: file1 and file2.
file1 is a list of email addresses needing to be processed.
file2 is a list of email addresses that are bad.
The first thing the script must do is remove duplicates from file1. Be sure your script strips spaces, tabs, and other stray whitespace before initial processing, and verify that each line is actually an email address. For error checking, the duplicates must be written to a file called [url removed, login to view] or something like that.
After this runs, file1 should contain no duplicates. Duplicates should be [url removed, login to view], and the processed, duplicate-free file should still be called file1.
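A minimal sketch of the dedup step in Perl. Since the log filename was removed from the posting, `dupes.txt` is used here as a placeholder; the demo input at the top stands in for an existing file1, and the email check is deliberately crude (a production script might use the CPAN module Email::Valid instead).

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Demo input; in practice file1 already exists.
open my $demo, '>', 'file1' or die "Cannot write file1: $!";
print $demo " a\@b.com \n", "a\@b.com\n", "c\@d.com\n", "not-an-email\n";
close $demo;

my %seen;
my (@clean, @dupes);

open my $in, '<', 'file1' or die "Cannot read file1: $!";
while (my $line = <$in>) {
    $line =~ s/\s+//g;                 # strip spaces, tabs, newlines
    next unless length $line;          # skip blank lines
    # Crude address check; swap in Email::Valid for real validation.
    next unless $line =~ /^[^@\s]+@[^@\s]+\.[^@\s]+$/;
    if ($seen{lc $line}++) {
        push @dupes, $line;            # repeat: log it for error checking
    } else {
        push @clean, $line;
    }
}
close $in;

# Rewrite file1 without duplicates; log repeats to dupes.txt (placeholder name).
open my $out, '>', 'file1' or die "Cannot write file1: $!";
print $out "$_\n" for @clean;
close $out;

open my $log, '>', 'dupes.txt' or die "Cannot write dupes.txt: $!";
print $log "$_\n" for @dupes;
close $log;
```

With the demo data, file1 ends up holding `a@b.com` and `c@d.com`, while `dupes.txt` records the repeated `a@b.com`; the invalid line is silently dropped.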
After deduping, file1 needs to be compared with file2. file2 must also be checked, verified, and deduped (using [url removed, login to view] as its log file). Since file2 contains all the bad email addresses, any address that shows up in file2 must be removed from file1 and logged to a file called bad.txt.
At the very end, file1 will only have email addresses, no dupes, and no email addresses found in file2.
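The comparison step could be sketched like this, assuming file2 has already been cleaned and deduped by the same routine as file1. The demo inputs at the top are placeholders for the real files; loading file2 into a hash gives constant-time lookups, so the whole pass is linear in the size of the two files.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Demo inputs; in practice both files already exist and are deduped.
open my $d1, '>', 'file1' or die "Cannot write file1: $!";
print $d1 "a\@b.com\n", "c\@d.com\n", "e\@f.com\n";
close $d1;
open my $d2, '>', 'file2' or die "Cannot write file2: $!";
print $d2 "c\@d.com\n";
close $d2;

# Load the bad addresses into a hash for O(1) lookups.
my %bad;
open my $f2, '<', 'file2' or die "Cannot read file2: $!";
while (my $line = <$f2>) {
    $line =~ s/\s+//g;
    $bad{lc $line} = 1 if length $line;
}
close $f2;

# Keep only file1 addresses not found in file2; log the rest.
my (@keep, @removed);
open my $f1, '<', 'file1' or die "Cannot read file1: $!";
while (my $line = <$f1>) {
    $line =~ s/\s+//g;
    next unless length $line;
    if ($bad{lc $line}) { push @removed, $line }
    else                { push @keep,    $line }
}
close $f1;

open my $out, '>', 'file1' or die "Cannot write file1: $!";
print $out "$_\n" for @keep;
close $out;

open my $log, '>', 'bad.txt' or die "Cannot write bad.txt: $!";
print $log "$_\n" for @removed;
close $log;
```

With the demo data, file1 is left with `a@b.com` and `e@f.com`, and `bad.txt` records the removed `c@d.com`, matching the end state described above.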
I want this in Perl at the moment, but eventually PHP, so I can upload both files and have it give back the processed files for me to download and/or view...
Testing will take me 2-3 days.
My budget is no more than $20 for the Perl script alone, unless you can give me both Perl and PHP. Please PM me for the specs of the PHP script if you are interested. I may close bidding early. Thanks.