How can I dump DBFs to CSVs faster? #38
Comments
Can you benchmark the imports alone? I presume much of your benchmark is spent on `import pandas`, which is quite a heavy dependency (takes 0.7 s, warm, on my machine). Plus there's a bit of overhead from the non-lazy DataFrame creation, but it's not terrible at 25% using my dummy dataset (1.1 MB, 20 columns, strings and ints). Edit: I have now tried it with a larger file (40 MB) to see if the lazy/eager approach makes a larger difference, and indeed it does - 45% extra runtime under pandas.
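A minimal sketch of isolating that import cost (the timing approach here is an assumption, not the commenter's exact method):

```python
# Hypothetical measurement: time the one-off cost of importing pandas,
# separately from the per-iteration work of the benchmark.
import time

start = time.perf_counter()
import pandas as pd  # the heavy import being measured
print(f"import pandas took {time.perf_counter() - start:.2f}s")
```

On Python 3.7+, `python3 -X importtime -c "import pandas"` prints a per-module breakdown of the same cost.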
@kokes I ran the following, saved as `run_3.py`, which only imports pandas and dbfread once and doesn't re-execute Python during each iteration. It wasn't run on Python 2.7.12 because the `yield from` syntax is Python 3-only.

```python
from dbfread import DBF
import pandas as pd

def get_dbf():
    yield from DBF('FILE.DBF', encoding='latin-1', char_decode_errors='strict')

for _ in range(0, 1000):
    pd.DataFrame(get_dbf()).to_csv('FILE.csv', index=False, encoding='utf-8', mode='w')
```

```
$ time python3 run_3.py
# 11m30.328s
```

Any more ideas of how we could get this closer to 74 seconds?
For Python 2 compatibility, the `yield from` line can be replaced with an explicit loop:

```python
data = DBF(...)
for el in data:
    yield el
```
It comes down to what the limits are. I tried running the three obvious pieces of code:

(I also tried the eager mode in dbfread itself, using `load=True`.)

This goes to show that the major overhead is this library, and unless it's changed, you won't get faster than the baseline. So I wouldn't focus on the CSV writer; I'd look at the parsing itself - that is, if you really want to go faster.
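For reference, a sketch of the parse-only baseline this comparison implies: iterate every record and write nothing, so any CSV writer can only add time on top of it (filename as in the earlier comments):

```python
# Parse-only baseline: whatever this takes is the floor for any
# dbfread-based CSV dump, regardless of how the output is written.
from dbfread import DBF

count = 0
for record in DBF('FILE.DBF', encoding='latin-1'):
    count += 1
print(count, "records parsed")
```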
dbfread author here. I guess the main reason dbf2csv is faster is that it's written in C++. A lot of the operations in dbfread are fiddling with little bits of binary data, which would be a lot faster in a language closer to the hardware. That said, I'm sure dbfread could be faster and I'd be interested in any suggestions you might have. @kokes The eager mode just builds lists from the same record iterator:

```python
def load(self):
    if not self.loaded:
        self._records = list(self._iter_records(b' '))
        self._deleted = list(self._iter_records(b'*'))
```
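A short usage sketch of that eager path through the public API (the filename follows the earlier comments; this snippet is illustrative, not from the thread):

```python
# load=True triggers load() up front: every record is parsed into an
# in-memory list instead of being yielded lazily during iteration.
from dbfread import DBF

table = DBF('FILE.DBF', encoding='latin-1', load=True)
print(len(table.records), "records loaded eagerly")
```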
In the loop you use to save the CSV, you are instantiating a new DataFrame object at every iteration; this definitely adds up in wasted time. You don't need pandas here. When you iterate over a DBF object, you get an ordered dict. You can write them straight to the CSV file using csv.DictWriter as shown here. On an SSD drive I am getting ~8k lines written per second, and I am writing to a gzipped CSV, which adds the overhead of compression.
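A sketch of that csv.DictWriter approach, including the gzip output the comment mentions (the filenames and options are assumptions, not the commenter's exact code):

```python
# Stream dbfread records straight into a gzipped CSV via csv.DictWriter,
# with no pandas involved. Each record is already a dict of field -> value.
import csv
import gzip

from dbfread import DBF

table = DBF('FILE.DBF', encoding='latin-1')
with gzip.open('FILE.csv.gz', 'wt', newline='', encoding='utf-8') as f:
    writer = csv.DictWriter(f, fieldnames=table.field_names)
    writer.writeheader()
    for record in table:  # lazy iteration, one record in memory at a time
        writer.writerow(record)
```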
We've already discussed this above: #38 (comment). And since we know how fast the parsing itself is (by not writing anything, in the same post), we know the performance ceiling of this library - and as the author notes, the difference against the other library is primarily due to the language used. I don't think there's much to add here.
You're welcome.
I ran a benchmark dumping a DBF to CSV 1,000 times using dbfread 2.0.7 and Pandas 0.24.1, and compared it to https://github.com/yellowfeather/dbf2csv.
The file I used was a 1.5 MB DBF in 'FoxBASE+/Dbase III plus, no memo' format with 4,522 records and 40 columns, made up of 36 numeric fields, 2 char fields and 2 date fields. I used CPython 2.7.12 for dbfread, and I compiled dbf2csv using GCC 5.4.0 and libboost 1.58.0. I ran the test on a t3.2xlarge instance on AWS.
The disk can be written to at 115 MB/s according to a simple write-throughput test:
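(A typical way to take such a measurement; this exact command is an assumption, not necessarily the one used:)

```
$ dd if=/dev/zero of=throughput.test bs=1M count=1024 conv=fdatasync
```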
dbfread and Pandas managed to write the CSV 1,000 times in 26 minutes, while dbf2csv took just under 74 seconds - a 21x difference in performance (26 minutes ≈ 1,560 seconds; 1,560 / 74 ≈ 21).
These were the steps I took during the benchmark:
The DBF would have sat in the page cache but nonetheless represented 1,489 MB of data read across the 1,000 runs, and the resulting CSV output came to 638 MB. Over the ~1,560-second run this works out to 0.95 MB/s read and 0.41 MB/s written.
Do you know of any way I could see improved performance in this workload using dbfread?