Two things come to mind immediately:
(1) With PHP you have access to zlib - it's entirely possible to get compression ratios that would reduce 10 MB of plain text to 2 MB of compressed data, especially since a CSV file contains so much repeated, redundant information. If each element were a single byte followed by a comma and a space, the separators alone would make up two bytes out of every three - about 66% of the file.
-Just tried a 20.4 KB file: it compressed to 2.91 KB (using 7-Zip, set to gzip, Ultra compression). A larger file could get an even better ratio, since there's more data available to match against pre-existing tokens.
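The zlib approach can be sketched in a few lines of PHP. This is a minimal example, assuming the zlib extension is enabled and using `Baltimore.csv` as a stand-in input file; `gzencode()` produces standard gzip output that `gzdecode()` (or any gzip tool) can unpack on the receiving end:

```php
<?php
// Compress the CSV before upload (requires the zlib extension).
$raw        = file_get_contents('Baltimore.csv'); // stand-in input file
$compressed = gzencode($raw, 9);                  // gzip format, max compression

file_put_contents('Baltimore.csv.gz', $compressed);

printf("%d bytes -> %d bytes (%.1f%%)\n",
    strlen($raw), strlen($compressed),
    100 * strlen($compressed) / strlen($raw));

// On the receiving end:
// $raw = gzdecode(file_get_contents('Baltimore.csv.gz'));
?>
```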
(2) Should that approach prove unfruitful, I'd simply chop the file into 2 MB chunks, append a two-letter suffix to each part's filename, upload the parts, then glue them back together on the receiving end. E.g. file.csv.ae, file.csv.be, file.csv.ce, file.csv.de, file.csv.ee - the first letter keeps the parts in the right order, and the second letter (the same in every suffix) names the final part, so from the filenames alone you can tell whether you've received all the parts and can stick the file back together again. You could also do this purely in JavaScript, should the architecture be changed or extended in the future.
[EDIT]
ChopFile.php
<?php
$fileName  = 'Baltimore.csv';
$chunkSize = 2048;

$inputData   = file_get_contents($fileName);
$inputLength = strlen($inputData);
$remaining   = $inputLength;

// ceil() avoids creating an empty extra part when the length is an
// exact multiple of the chunk size.
$numFiles = (int)ceil($inputLength / $chunkSize);

// Second letter of every suffix: names the final part so the receiver
// can tell when all parts have arrived. (A-Z limits this to 26 parts.)
$lastLetterInName = 65 + ($numFiles - 1);

printf("Input File: %s\n<br>", $fileName);
printf("File Size: %d\n<br>", $inputLength);
printf("Chunk Size: %d bytes\n<br>", $chunkSize);
printf(" Num files: %d.\n<br>", $numFiles);

$curFileSize = $chunkSize;
for ($i = 0; $i < $numFiles; $i++)
{
    // The last chunk is usually shorter than a full chunk.
    if ($remaining < $chunkSize)
        $curFileSize = $remaining;

    $curData     = substr($inputData, $i * $chunkSize, $curFileSize);
    $curFileName = sprintf("%s.%c%c", $fileName, 65 + $i, $lastLetterInName);

    $curFile = fopen($curFileName, 'wb');
    fwrite($curFile, $curData, $curFileSize);
    fclose($curFile);

    printf("file %2d. - (%d bytes) - %s\n<br>", $i, $curFileSize, $curFileName);
    $remaining -= $chunkSize;
}
?>
Output:
Input File: Baltimore.csv
File Size: 20959
Chunk Size: 2048 bytes
Num files: 11.
file 0. - (2048 bytes) - Baltimore.csv.AK
file 1. - (2048 bytes) - Baltimore.csv.BK
file 2. - (2048 bytes) - Baltimore.csv.CK
file 3. - (2048 bytes) - Baltimore.csv.DK
file 4. - (2048 bytes) - Baltimore.csv.EK
file 5. - (2048 bytes) - Baltimore.csv.FK
file 6. - (2048 bytes) - Baltimore.csv.GK
file 7. - (2048 bytes) - Baltimore.csv.HK
file 8. - (2048 bytes) - Baltimore.csv.IK
file 9. - (2048 bytes) - Baltimore.csv.JK
file 10. - (479 bytes) - Baltimore.csv.KK
Plus, of course, 11 files in the folder the script resides in.
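The receiving end can be sketched the same way. This GlueFile.php is a hypothetical companion script, not part of the original: it assumes the parts produced above sit in the current directory, uses the second letter of the last suffix to work out how many parts to expect, and relies on alphabetical order of the first letter to reassemble them:

```php
<?php
// GlueFile.php - reassemble the parts produced by ChopFile.php.
$fileName = 'Baltimore.csv';

// Match only the two-uppercase-letter suffixes, e.g. .AK ... .KK
$parts = glob($fileName . '.[A-Z][A-Z]');
sort($parts); // first letter A, B, C... gives the right order

// The second letter of any suffix names the last part,
// so we can compute how many parts there should be.
$lastLetter = substr(end($parts), -1);
$expected   = ord($lastLetter) - 65 + 1;

if (count($parts) != $expected)
    die("Missing parts: have " . count($parts) . " of $expected\n");

$out = fopen($fileName . '.joined', 'wb');
foreach ($parts as $part)
    fwrite($out, file_get_contents($part));
fclose($out);

printf("Rebuilt %s.joined from %d parts\n", $fileName, count($parts));
?>
```

Writing to `$fileName . '.joined'` rather than `$fileName` itself is just a precaution so a test run doesn't clobber the original file if it's still in the same folder.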