X-Git-Url: https://git.novaco.in/?a=blobdiff_plain;f=HOWTO.md;h=a693a96c1d2471dda75699a19af43947bcd380a9;hb=83c23065055998912c661ff49edf93568c488689;hp=3e7755f52354380cd329c05da21474f64496adf3;hpb=2fd0a9bfa81a22f2789f518d1e3e9d3767ad0d7a;p=electrum-server.git

diff --git a/HOWTO.md b/HOWTO.md
index 3e7755f..a693a96 100644
--- a/HOWTO.md
+++ b/HOWTO.md
@@ -100,7 +100,7 @@ our ~/bin directory:
     # apt-get install make g++ python-leveldb libboost-all-dev libssl-dev libdb++-dev pkg-config libminiupnpc-dev git
     # su - novacoin
-    $ cd ~/src && git clone https://github.com/nova-project/novacoin.git
+    $ cd ~/src && git clone https://github.com/novacoin-project/novacoin.git
     $ cd novacoin/src
     $ make -f makefile.unix
     $ strip novacoind
@@ -179,16 +179,16 @@ The section in the electrum server configuration file (see step 10) looks like t
 
 ### Step 8. Import blockchain into the database or download it
 
-It's recommended to fetch a pre-processed leveldb from the net
+It's recommended to fetch a pre-processed leveldb from the net.
 
-You can fetch recent copies of electrum leveldb databases and further instructions
-from the Electrum full archival server foundry at:
-http://foundry.electrum.org/
+You can fetch recent copies of electrum leveldb databases from the novacoin SourceForge page at:
+
+http://sourceforge.net/projects/novacoin/files/electrum-foundry/
 
 Alternatively, if you have the time and nerve, you can import the blockchain yourself.
 
-As of April 2014 it takes between two days and over a week to import 300k of blocks, depending
-on CPU speed, I/O speed and selected pruning limit.
+As of July 2014 it takes about one hour to import 110k blocks, depending on CPU speed,
+I/O speed and the selected pruning limit.
 
 It's considerably faster and strongly recommended to index in memory. You can use /dev/shm
 or create a tmpfs, which will also use swap if you run out of memory:
@@ -200,8 +200,8 @@ RAM but add 15 gigs of swap from a file that's fine too. tmpfs is rather smart t
 used parts. It's fine to use a file on an SSD for swap in this case.
 
 It's not recommended to do initial indexing of the database on an SSD because the indexing process
-does at least 20 TB (!) of disk writes and puts considerable wear-and-tear on a SSD. It's a lot better
-to use tmpfs and just swap out to disk when necessary.
+puts considerable wear and tear on an SSD. It's a lot better to use tmpfs and just swap out to disk
+when necessary.
 
 Databases have grown to roughly 8 GB in April 2014, give or take a gigabyte between pruning limits
 100 and 10000. Leveldb prunes the database from time to time, so it's not uncommon to see databases
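
A note on the first hunk: it corrects the clone URL for building novacoind, and its
hunk context ("our ~/bin directory") implies the stripped binary ends up there. A minimal
sketch of finishing that step, assuming ~/bin already exists and is on the novacoin user's
PATH; the cp destination and the -daemon flag are assumptions, not shown in the diff:

    $ # assumption: ~/bin exists, as the hunk context "our ~/bin directory" suggests
    $ cp ~/src/novacoin/src/novacoind ~/bin/
    $ ~/bin/novacoind -daemon    # assumed bitcoind-style daemon flag; check novacoind --help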
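For Step 8's download path, a hedged sketch of fetching and unpacking a pre-processed
leveldb, assuming a hypothetical archive name electrum-foundry-leveldb.tar.gz and a
hypothetical database directory ~/electrum-db; only the SourceForge project URL comes
from the diff, so browse the files page for the real archive name:

    $ cd ~
    $ # hypothetical archive name; replace with an actual file from the page above
    $ wget -O electrum-foundry-leveldb.tar.gz \
        http://sourceforge.net/projects/novacoin/files/electrum-foundry/electrum-foundry-leveldb.tar.gz/download
    $ mkdir -p ~/electrum-db    # hypothetical path; must match the db dir in your server config
    $ tar -xzf electrum-foundry-leveldb.tar.gz -C ~/electrum-db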
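The last two hunks recommend indexing in memory and letting swap absorb any overflow.
A minimal sketch of that setup, assuming a hypothetical mount point /mnt/electrum-tmpfs,
a 16g tmpfs size cap, and the 15 GB swap file mentioned in the hunk context; all sizes
and paths are illustrative:

    # mkdir -p /mnt/electrum-tmpfs
    # mount -t tmpfs -o size=16g tmpfs /mnt/electrum-tmpfs
    # dd if=/dev/zero of=/swapfile bs=1M count=15360    # 15 GB, per the "15 gigs of swap" context
    # chmod 600 /swapfile
    # mkswap /swapfile
    # swapon /swapfile

Point the database directory at the tmpfs mount for the initial import, then move the
finished leveldb to permanent storage, as the diff's "swap out to disk when necessary"
advice suggests.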