WARNING: FTP and Samba can show very different file sizes on large files.

Just a quick warning to anyone copying files from the Virtual folder via the Windows protocol (Samba).

I recorded a 10GB programme on Saturday, copied it to the virtual folder to decrypt it, and then copied it from there to my PC via Windows Explorer. This would be using the Samba package.

The resulting file on my PC was 1.5GB. When I connected via FTP and downloaded it that way, it was the correct 10GB. The first copy played for only 32 minutes; the FTP copy was over 2 hours.
 
I do agree with you that there is a problem with the samba package's large file support. I have been trying to fix this but haven't yet come up with a solution.
 
Very strange, I compiled samba with large file support included. Normally the lack of LFS manifests itself with any file larger than 2GB; it seems that with samba this limit is 4GB. I have a large .ts file which FileZilla reports as 5,644,179,840 bytes, whilst the samba share sees it as 1,349,212,544 bytes, a difference of exactly 4GB (2^32 bytes). Anything smaller than 4GB is reported correctly.
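
Those numbers fit the classic pattern of a 64-bit size being squeezed through a 32-bit field, which keeps only the low 32 bits, i.e. the size modulo 2^32. A minimal sketch in C reproducing the arithmetic (not samba's actual code, just an illustration of the suspected failure mode):

```c
#include <stdio.h>
#include <stdint.h>

int main(void) {
    /* The size FTP/FileZilla reports for the .ts file. */
    uint64_t real_size = 5644179840ULL;

    /* Squeezing it through a 32-bit field keeps only the low 32 bits. */
    uint32_t truncated = (uint32_t)real_size;

    printf("share sees: %u bytes\n", truncated);          /* 1349212544 */
    printf("difference: %llu bytes\n",
           (unsigned long long)(real_size - truncated));  /* 4294967296, i.e. 4GB */
    return 0;
}
```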
 
I've noticed this too; good to know that FTP works OK. Is there an alternative version of samba that can be compiled?
 
Dare I ask if there's any progress here? I just copied a file from the virtual disk to my PC using Samba and only 1.98GB was transferred, which turns out to be the first third of the movie in question. Samba also shows the source file as 1.98GB, but ftp shows it to be 6GB.

This isn't the end of the world: it's not as if Samba transfers are greased-lightning fast and ftp is a total dog. However, given the size of HD files on the Hummy, a 2GB file size limit leaves Samba in the pretty-much-useless category.
 
Nobody seems to have reacted to my suggestion that it is a 32-bit overflow. Not knowing my arse from my elbow, isn't it just a case of changing the relevant variables to 64-bit and recompiling?
 
As Raydon said, he compiled it with large file support, so either it didn't compile correctly or there is a bug in the code somewhere.
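
For what it's worth, the usual way large file support goes missing on a 32-bit build is at compile time: unless everything is built with -D_FILE_OFFSET_BITS=64 (and -D_LARGEFILE_SOURCE for fseeko/ftello), glibc keeps off_t at 32 bits and stat() on anything over 2GB fails with EOVERFLOW. A minimal sketch of that standard glibc behaviour (not samba's code) that could be used to check a toolchain:

```c
/* Compile once normally and once with -D_FILE_OFFSET_BITS=64 to see
 * the difference on a 32-bit system. */
#include <stdio.h>
#include <sys/stat.h>

int main(int argc, char **argv) {
    struct stat st;
    if (argc < 2) {
        fprintf(stderr, "usage: %s <file>\n", argv[0]);
        return 1;
    }
    if (stat(argv[1], &st) != 0) {
        /* Without LFS, stat() on a >2GB file fails with EOVERFLOW. */
        perror("stat");
        return 1;
    }
    printf("size: %lld bytes (off_t is %u bytes wide)\n",
           (long long)st.st_size, (unsigned)sizeof(st.st_size));
    return 0;
}
```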
 
I have recompiled it with large file support, but since af123 is away you may have to wait a while for an "official" package.

However, during testing I tried to copy a single large file onto a Linux server by various means (samba, cifs, ftp and rsync). Only ftp and samba managed to copy the file correctly, and even then only once each: the other protocols produced inconsistent and corrupt copies, and ftp and samba also produced incorrect results after the first correct copy. I suspect an error in the kernel networking code or the hardware, but it really needs more tests to narrow down what is actually going on. I think this may go some way towards explaining the corrupt copies people were seeing in the FTP thread.
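
If anyone wants to help narrow this down, comparing a checksum of each copy against the original would show whether the corruption is in the transfer itself or just in the reported sizes. A rough, hypothetical test harness (any real checksum tool such as md5sum would do just as well):

```c
/* Build with -D_FILE_OFFSET_BITS=64 so files over 2GB open on 32-bit boxes. */
#include <stdio.h>
#include <stdint.h>

int main(int argc, char **argv) {
    if (argc < 2) {
        fprintf(stderr, "usage: %s <file>\n", argv[0]);
        return 1;
    }
    FILE *f = fopen(argv[1], "rb");
    if (!f) {
        perror("fopen");
        return 1;
    }
    uint64_t hash = 14695981039346656037ULL;  /* FNV-1a offset basis */
    uint64_t total = 0;
    unsigned char buf[65536];
    size_t n;
    while ((n = fread(buf, 1, sizeof buf, f)) > 0) {
        for (size_t i = 0; i < n; i++) {
            hash ^= buf[i];
            hash *= 1099511628211ULL;         /* FNV-1a prime */
        }
        total += n;
    }
    fclose(f);
    /* Identical hash and byte count on source and copy => clean transfer. */
    printf("%016llx  %llu bytes\n",
           (unsigned long long)hash, (unsigned long long)total);
    return 0;
}
```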
 
Applied the fix and stopped/restarted the service. File sizes still don't show correct values above 4GB. Am I missing something?
 
The correct file sizes are shown for me. Are you sure it is running the new version 2.2.12-2?
 
Just upgraded and it now appears to be showing the correct file sizes (on my HD; I don't have an HDR). I will try copying something large across later to test.
 