NO LONGER MAINTAINED
The twitter_ebooks binary now supports functionality similar to this. Use that instead.
```
npm install -g backup-to-ebooks-archive
```

Usage:

```
backup-to-ebooks-archive <backup.csv> <export.json>
```

For example:

```
backup-to-ebooks-archive tweets.csv username.json
```
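Conceptually the conversion is simple: each row of the CSV backup becomes one tweet object in a JSON array. A minimal sketch in Python of that idea (the column names `tweet_id` and `text`, and the output object shape, are illustrative assumptions, not the documented twitter_ebooks corpus format):

```python
import csv
import io
import json

def backup_csv_to_json(csv_text):
    """Convert a Twitter backup CSV into a JSON array of tweet objects.

    Assumes the export has 'tweet_id' and 'text' columns and that the
    consumer wants a list of {"id", "text"} objects -- both are
    illustrative assumptions, not this tool's actual implementation.
    """
    reader = csv.DictReader(io.StringIO(csv_text))
    tweets = [{"id": row["tweet_id"], "text": row["text"]} for row in reader]
    return json.dumps(tweets)

# Tiny in-memory stand-in for tweets.csv:
sample = "tweet_id,text\n1,hello world\n2,second tweet\n"
print(backup_csv_to_json(sample))
```

The real tool handles the quirks of Twitter's actual export (quoting, extra columns, encoding), but the shape of the transformation is the same.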
From there you can import your full Twitter archive into your ebooks bot:
```
$ ebooks consume username.json
Faraday::Builder is now Faraday::RackBuilder.
Reading json corpus from username.json
Removing commented lines and sorting mentions
Segmenting text into sentences
Tokenizing 14600 statements and 12992 mentions
Ranking keywords
Corpus consumed to model/username.model
```
When I ran the archive feature of twitter_ebooks, it imported only a paltry 3,000 of my 18,000 tweets, and I didn't want to wrestle with API rate limits to fetch what Twitter's export feature already provides.