issue so you can continue following this howto.
**Software.** A recent Linux distribution with the following software
-installed: `python`, `easy_install`, `git`, a SQL server, standard C/C++
+installed: `python`, `easy_install`, `git`, standard C/C++
build chain. You will need root access in order to install other software or
-Python libraries. You will need access to the SQL server to create users and
-databases.
-
-**Hardware.** It's recommended to run a pruning server with leveldb.
-It is a light setup with diskspace requirements well under 1 GB growing
-very moderately and less taxing on I/O and CPU once it's up and running.
-Full (archival) servers on the other hand use SQL. At the time of this writing,
-the Bitcoin blockchain is 5.5 GB large. The corresponding SQL database is
-about 4 times larger, so you should have a minimum of 22 GB free space just
-for SQL, growing continuously.
-CPU speed is also important, mostly for the initial block chain import, but
-also if you plan to run a public Electrum server, which could serve tens
-of concurrent requests. See step 6 below for some initial import benchmarks
-on SQL.
+Python libraries.
+
+**Hardware.** The lightest setup is a pruning server, with diskspace
+requirements well under 1 GB that grow very moderately, and with modest
+I/O and CPU demands once it's up and running. Note, however, that you
+also need to run bitcoind and keep a copy of the full blockchain, which
+is roughly 9 GB in April 2013. If you have less than 2 GB of RAM, make
+sure you limit bitcoind to 8 concurrent connections. If you have more
+resources to spare you can run the server with a higher limit of historic
+transactions per address. CPU speed is also important, mostly for the
+initial block chain import, but also if you plan to run a public Electrum
+server, which could serve tens of concurrent requests. Any multi-core x86
+CPU from ~2009 or newer other than an Atom should give good performance.
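The connection limit mentioned above is set in bitcoind's configuration file; a minimal sketch, assuming the default `~/.bitcoin/bitcoin.conf` location:

```
# bitcoin.conf: cap peer connections to keep bitcoind's memory use low
maxconnections=8
```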
Instructions
------------
-### Step 0. Create a user for running bitcoind and Electrum server
+### Step 1. Create a user for running bitcoind and Electrum server
This step is optional, but for better security and resource separation I
suggest you create a separate user just for running `bitcoind` and Electrum.
PATH="$HOME/bin:$PATH"
-### Step 1. Download and install Electrum
+### Step 2. Download and install Electrum
We will download the latest git snapshot for Electrum and 'install' it in
our ~/bin directory:
$ chmod +x ~/src/electrum/server/server.py
$ ln -s ~/src/electrum/server/server.py ~/bin/electrum
-### Step 2. Download Bitcoind from git & patch it
+### Step 3. Download bitcoind stable and patch it
-In order for the latest versions of Electrum to work properly we will need to use the latest
-build from Git and also patch it with an electrum specific patch.
-Please make sure you run a version of bitcoind from git from at least December 2012 or newer:
+In order for the latest versions of Electrum to work properly we will need
+bitcoind 0.8.1 stable or higher. It can be downloaded from GitHub and
+needs to be patched with an Electrum-specific patch.
- $ cd src && git clone git://github.com/bitcoin/bitcoin.git
- $ cd bitcoin
+ $ cd src && wget https://github.com/bitcoin/bitcoin/archive/v0.8.1.tar.gz
+ $ tar xfz v0.8.1.tar.gz
+ $ cd bitcoin-0.8.1
$ patch -p1 < ~/src/electrum/server/patch/patch
- $ cd src && make -f makefile.unix
+ $ cd src && make USE_UPNP= -f makefile.unix
-### Step 3. Configure and start bitcoind
+### Step 4. Configure and start bitcoind
In order to allow Electrum to "talk" to `bitcoind`, we need to set up a RPC
username and password for `bitcoind`. We will then start `bitcoind` and
time, running as the 'bitcoin' user. Check your system documentation to
find out the best way to do this.
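The RPC credentials go into bitcoind's configuration file; a minimal sketch (the values are placeholders you should change):

```
# ~/.bitcoin/bitcoin.conf
rpcuser=electrumrpc
rpcpassword=choose-a-long-random-password
daemon=1
```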
-
-### Step 4. Select your backend - pruning leveldb or full abe server
-
-Electrum server can currently be operated in two modes - as a pruning server
-or as a full server. The pruning server uses leveldb and keeps a smaller and
-faster database by pruning spent transactions. It's a lot quicker to get up
-and running and requires less maintenance and diskspace than the full abe
-server.
-
-The full version uses abe as a backend. While the blockchain in bitcoind
-is at roughly 5.5 GB in January 2013, the abe mysql for a full server requires
-~25 GB diskspace for innodb and can take a week or two (!) to freshly index
-on most but the fastest of hardware.
-
-Full servers are useful for recovering all past transactions when restoring
-from seed. Those are then stored in electrum.dat and won't need to be recovered
-until electrum.dat is removed. Pruning servers summarize spent transactions
-when restoring from seed which can be feature. Once seed recovery is done
-switching between pruning and full servers can be done at any time without effect
-to the transaction history stored in electrum.dat.
-
-While it's useful for Electrum to have a number of full servers it is
-expected that the vast majority of servers available publicly will be
-pruning servers.
-
-If you decide to setup a pruning server with leveldb take a break from this
-document, read and work through README.leveldb then come back
-install jsonrcp (but not abe) from step 5 and then skip to step 8
-
### Step 5. Install Electrum dependencies
Electrum server depends on various standard Python libraries. These will be
already installed on your distribution, or can be installed with your
-package manager. Electrum also depends on two Python libraries which we wil
-l need to install "by hand": `Abe` and `JSONRPClib`.
+package manager. Electrum also depends on one Python library which we will
+need to install "by hand": `JSONRPClib`.
$ sudo easy_install jsonrpclib
- $ cd ~/src
- $ git clone https://github.com/jtobey/bitcoin-abe
- $ cd bitcoin-abe
- $ git checkout c2a9969e20305faa41c40ae47533f2138f222ffc
- $ sudo python setup.py install
-
-Electrum server does not currently support abe 0.7.2+ so please stick
-with a specific commit between 0.7.1 and 0.7.2 for the time being.
-
-Please note that the path below might be slightly different on your system,
-for example python2.6 or 2.8.
-
- $ sudo chmod +x /usr/local/lib/python2.7/dist-packages/Abe/abe.py
- $ ln -s /usr/local/lib/python2.7/dist-packages/Abe/abe.py ~/bin/abe
-
-
-### Step 6. Configure the database
+ $ sudo apt-get install python-openssl
-Electrum server uses a SQL database to store the blockchain data. In theory,
-it supports all databases supported by Abe. At the time of this writing,
-MySQL and PostgreSQL are tested and work ok, SQLite was tested and *does not
-work* with Electrum server.
+### Step 6. Install leveldb
-For MySQL:
+ $ sudo apt-get install python-leveldb
+
+See the steps in README.leveldb for further details, especially if your system
+doesn't have the python-leveldb package.
- $ mysql -u root -p
- mysql> create user 'electrum'@'localhost' identified by '<db-password>';
- mysql> create database electrum;
- mysql> grant all on electrum.* to 'electrum'@'localhost';
- mysql> exit
+### Step 7. Select your limit
-For PostgreSQL:
+Electrum server uses leveldb to store transactions. You can choose
+how many spent transactions per address you want to store on the server.
+The default is 100, but there are also servers with 1000 or even 10000.
+Very few addresses have more than 10000 transactions. A limit this high
+can be considered equivalent to a "full" server. Full servers previously
+used abe to store the blockchain; the use of abe for Electrum servers is
+now deprecated.
- TBW!
-
-### Step 7. Configure Abe and import blockchain into the database
-
-When you run Electrum server for the first time, it will automatically
-import the blockchain into the database, so it is safe to skip this step.
-However, our tests showed that, at the time of this writing, importing the
-blockchain via Abe is much faster (about 20-30 times faster) than
-allowing Electrum to do it.
-
- $ cp ~/src/bitcoin-abe/abe.conf ~/abe.conf
- $ $EDITOR ~/abe.conf
-
-For MySQL, you need these lines:
-
- dbtype MySQLdb
- connect-args = { "db" : "electrum", "user" : "electrum" , "passwd" : "<database-password>" }
-
-For PostgreSQL, you need these lines:
-
- TBD!
-
-Start Abe:
-
- $ abe --config ~/abe.conf
-
-Abe will now start to import blocks. You will see a lot of lines like this:
-
- 'block_tx <block-number> <tx-number>'
+The pruning server uses leveldb and keeps a smaller and
+faster database by pruning spent transactions. It's a lot quicker to get up
+and running and requires less maintenance and diskspace than abe.
-You should wait until you see this message on the screen:
+The section in the configuration file looks like this:
- Listening on http://localhost:2750
+ [leveldb]
+ path = /path/to/your/database
+ # for each address, history will be pruned if it is longer than this limit
+ pruning_limit = 100
-It means the blockchain is imported and you can exit Abe by pressing CTRL-C.
-You will not need to run Abe again after this step, Electrum server will
-update the blockchain by itself. We only used Abe because it is much faster
-for the initial import.
+### Step 8. Import blockchain into the database or download it
-Important notice: This is a *very* long process. Even on fast machines,
-expect it to take hours. Here are some benchmarks for importing
-~196K blocks (size of the Bitcoin blockchain in Septeber 2012):
+As of April 2013 it takes between 6 and 24 hours to import 230k blocks,
+depending on CPU speed, I/O speed and the selected pruning limit.
- * System 1: ~9 hours.
- * CPU: Intel Core i7 Q740 @ 1.73GHz
- * HDD: very fast SSD
- * System 2: ~55 hours.
- * CPU: Intel Xeon X3430 @ 2.40GHz
- * HDD: 2 x SATA in a RAID1.
+It's considerably faster to index in memory. You can use /dev/shm to index
+in RAM, or create a tmpfs, which will also use swap if you run out of memory:
-### Step 7b. Alternatively: Fetch abe blockchain from the net for import
+ $ sudo mount -t tmpfs -o rw,nodev,nosuid,noatime,size=6000M,mode=0777 none /tmpfs
-It's much faster to import an existing dataset than to index the blockchain
-using abe yourself.
+At limit 100 the database comes to 2.6 GB with 230k blocks and takes roughly 6h to import in /dev/shm.
+At limit 1000 the database comes to 3.0 GB with 230k blocks and takes roughly 10h to import in /dev/shm.
+At limit 10000 the database comes to 3.5 GB with 230k blocks and takes roughly 24h to import in /dev/shm.
-Importing a mysql dump of ~8 GB takes around 18-20 hours on a regular HDD
-and can be sped up by using SSDs or importing into /dev/shm memory
+Alternatively, you can fetch a pre-processed leveldb database from the net.
-You can fetch recent copies of mysql dumps and further instructions
+You can fetch recent copies of Electrum leveldb databases and further instructions
from the Electrum full archival server foundry at:
http://electrum-foundry.no-ip.org/
-### Step 8. Configure Electrum server
+### Step 9. Configure Electrum server
Electrum reads a config file (/etc/electrum.conf) when starting up. This
file includes the database setup, bitcoind RPC setup, and a few other
using openssl. Otherwise you can just comment out the SSL / HTTPS ports and run
without.
-### Step 9. (Finally!) Run Electrum server
+### Step 10. (Finally!) Run Electrum server
The magic moment has come: you can now start your Electrum server:
`~/src/electrum/server`. You can use them as a starting point to create an
init script for your system.
-### Step 10. Test the Electrum server
+### Step 11. Test the Electrum server
We will assume you have a working Electrum client, a wallet and some
transactions history. You should start the client and click on the green
response time in the Server selection window. You should send/receive some
bitcoins to confirm that everything is working properly.
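Under the hood, client and server exchange newline-delimited JSON over TCP (port 50001 by default for plain TCP). A minimal Python sketch of building such a request, using the protocol's `server.version` method (the host/port and the surrounding socket handling are left out here):

```python
import json

# Build one newline-terminated JSON-RPC request, as an Electrum client
# would send it over a TCP connection to the server (e.g. localhost:50001).
def make_request(req_id, method, params):
    return json.dumps({'id': req_id, 'method': method, 'params': params}) + '\n'

line = make_request(0, 'server.version', [])
# The server replies with one JSON object per line, echoing the request id.
print(line.strip())
```

Sending that line to your server with a tool like `nc` and getting a JSON reply back is a quick sanity check that the TCP port is serving.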
-### Step 11. Join us on IRC, subscribe to the server thread
+### Step 12. Join us on IRC, subscribe to the server thread
Say hi to the dev crew, other server operators and fans on
irc.freenode.net #electrum and we'll try to congratulate you
[leveldb]
path = /path/to/your/database
+pruning_limit = 10
______________________________________________________________
./server load : view the size of the queue
+______________________
+Troubleshooting:
+
+* if your server or bitcoind is killed because it uses too much
+memory, configure bitcoind to limit the number of connections
+
+* if you see "Too many open files" errors, you may need to increase
+your user's file descriptor limit. For this, see
+http://www.cyberciti.biz/faq/linux-increase-the-maximum-number-of-open-files/
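One way to raise the limit permanently is via PAM limits; a sketch, assuming the server runs as a user named `electrum`:

```
# /etc/security/limits.conf: raise the open-files limit for the electrum user
electrum  soft  nofile  16384
electrum  hard  nofile  16384
```

The new limit takes effect on the user's next login session.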
Features
--------
- * The server uses a bitcoind and bitcoin-abe or a leveldb backend.
+ * The server uses a bitcoind and a leveldb backend.
* The server code is open source. Anyone can run a server, removing single
points of failure concerns.
* The server knows which set of Bitcoin addresses belong to the same wallet,
------------
1. To install and run a pruning server (easiest setup) see README.leveldb
- 2. Install [bitcoin-abe](https://github.com/jtobey/bitcoin-abe).
- 3. Install [jsonrpclib](https://github.com/joshmarshall/jsonrpclib).
- 4. Launch the server: `nohup python -u server.py > /var/log/electrum.log &`
+ 2. Install [jsonrpclib](https://github.com/joshmarshall/jsonrpclib).
+ 3. Launch the server: `nohup python -u server.py > /var/log/electrum.log &`
or use the included `start` script.
See the included `HOWTO.md` for greater detail on the installation process.
-### Important Note
-
-Do not run bitcoin-abe and electrum-server simultaneously, because they will
-both try to update the database.
-
-If you want bitcoin-abe to be available on your website, run it with
-the `--no-update` option.
-
-### Upgrading Abe
-
-If you upgrade abe, you might need to update the database. In the abe directory, type:
-
- python -m Abe.abe --config=abe.conf --upgrade
-
License
-------
self.address_queue = Queue()
self.dbpath = config.get('leveldb', 'path')
+ self.pruning_limit = config.getint('leveldb', 'pruning_limit')
+ self.db_version = 1 # increase this when database needs to be updated
self.dblock = threading.Lock()
try:
self.sent_header = None
try:
- hash_160 = bc_address_to_hash_160("1A1zP1eP5QGefi2DMPTfTL5SLmv7DivfNa")
- self.db.Get(hash_160)
- print_log("Your database '%s' is deprecated. Please create a new database"%self.dbpath)
- self.shared.stop()
- return
- except:
- pass
-
- try:
hist = self.deserialize(self.db.Get('height'))
- self.last_hash, self.height, _ = hist[0]
- print_log("hist", hist)
+ self.last_hash, self.height, db_version = hist[0]
+ print_log("Database version", self.db_version)
+ print_log("Blockchain height", self.height)
except:
- #traceback.print_exc(file=sys.stdout)
+ traceback.print_exc(file=sys.stdout)
print_log('initializing database')
self.height = 0
self.last_hash = '000000000019d6689c085ae165831e934ff763ae46a2a6c172b3f1b60a8ce26f'
+ db_version = self.db_version
+
+ # check version
+ if self.db_version != db_version:
+ print_log("Your database '%s' is deprecated. Please create a new database"%self.dbpath)
+ self.shared.stop()
+ return
# catch_up headers
self.init_headers(self.height)
def serialize(self, h):
s = ''
for txid, txpos, height in h:
- s += txid + int_to_hex(txpos, 4) + int_to_hex(height, 4)
- return s.decode('hex')
+ s += self.serialize_item(txid, txpos, height)
+ return s
+
+ def serialize_item(self, txid, txpos, height, spent=chr(0)):
+ s = (txid + int_to_hex(txpos, 4) + int_to_hex(height, 3)).decode('hex') + spent
+ return s
+
+ def deserialize_item(self,s):
+ txid = s[0:32].encode('hex')
+ txpos = int(rev_hex(s[32:36].encode('hex')), 16)
+ height = int(rev_hex(s[36:39].encode('hex')), 16)
+ spent = s[39:40]
+ return (txid, txpos, height, spent)
def deserialize(self, s):
h = []
while s:
- txid = s[0:32].encode('hex')
- txpos = int(rev_hex(s[32:36].encode('hex')), 16)
- height = int(rev_hex(s[36:40].encode('hex')), 16)
+ txid, txpos, height, spent = self.deserialize_item(s[0:40])
h.append((txid, txpos, height))
- s = s[40:]
+ if spent == chr(1):
+ txid, txpos, height, spent = self.deserialize_item(s[40:80])
+ h.append((txid, txpos, height))
+ s = s[80:]
return h
def block2header(self, b):
hist = []
is_known = False
- # should not be necessary
+ # sort history, because redeeming transactions are next to the corresponding txout
hist.sort(key=lambda tup: tup[2])
- # check uniqueness too...
# add memory pool
with self.mempool_lock:
for txid in self.mempool_hist.get(addr, []):
hist.append((txid, 0, 0))
- hist = map(lambda x: {'tx_hash': x[0], 'height': x[2]}, hist)
+ # uniqueness
+ hist = set(map(lambda x: (x[0], x[2]), hist))
+
+ # convert to dict
+ hist = map(lambda x: {'tx_hash': x[0], 'height': x[1]}, hist)
+
# add something to distinguish between unused and empty addresses
if hist == [] and is_known:
hist = ['*']
return {"block_height": height, "merkle": s, "pos": tx_pos}
+
def add_to_history(self, addr, tx_hash, tx_pos, tx_height):
# keep it sorted
- s = (tx_hash + int_to_hex(tx_pos, 4) + int_to_hex(tx_height, 4)).decode('hex')
+ s = self.serialize_item(tx_hash, tx_pos, tx_height) + 40*chr(0)
+ assert len(s) == 80
serialized_hist = self.batch_list[addr]
- l = len(serialized_hist)/40
+ l = len(serialized_hist)/80
for i in range(l-1, -1, -1):
- item = serialized_hist[40*i:40*(i+1)]
- item_height = int(rev_hex(item[36:40].encode('hex')), 16)
- if item_height < tx_height:
- serialized_hist = serialized_hist[0:40*(i+1)] + s + serialized_hist[40*(i+1):]
+ item = serialized_hist[80*i:80*(i+1)]
+ item_height = int(rev_hex(item[36:39].encode('hex')), 16)
+ if item_height <= tx_height:
+ serialized_hist = serialized_hist[0:80*(i+1)] + s + serialized_hist[80*(i+1):]
break
else:
serialized_hist = s + serialized_hist
txo = (tx_hash + int_to_hex(tx_pos, 4)).decode('hex')
self.batch_txio[txo] = addr
- def remove_from_history(self, addr, tx_hash, tx_pos):
- txi = (tx_hash + int_to_hex(tx_pos, 4)).decode('hex')
- if addr is None:
- try:
- addr = self.batch_txio[txi]
- except:
- raise BaseException(tx_hash, tx_pos)
+
+ def revert_add_to_history(self, addr, tx_hash, tx_pos, tx_height):
serialized_hist = self.batch_list[addr]
+ s = self.serialize_item(tx_hash, tx_pos, tx_height) + 40*chr(0)
+        if serialized_hist.find(s) == -1: raise BaseException("item not found", addr)
+ serialized_hist = serialized_hist.replace(s, '')
+ self.batch_list[addr] = serialized_hist
+
- l = len(serialized_hist)/40
+
+ def prune_history(self, addr, undo):
+ # remove items that have bit set to one
+ if undo.get(addr) is None: undo[addr] = []
+
+ serialized_hist = self.batch_list[addr]
+ l = len(serialized_hist)/80
for i in range(l):
- item = serialized_hist[40*i:40*(i+1)]
+ if len(serialized_hist)/80 < self.pruning_limit: break
+ item = serialized_hist[80*i:80*(i+1)]
+ if item[39:40] == chr(1):
+ assert item[79:80] == chr(2)
+ serialized_hist = serialized_hist[0:80*i] + serialized_hist[80*(i+1):]
+ undo[addr].append(item) # items are ordered
+ self.batch_list[addr] = serialized_hist
+
+
+ def revert_prune_history(self, addr, undo):
+ # restore removed items
+ serialized_hist = self.batch_list[addr]
+
+ if undo.get(addr) is not None:
+ itemlist = undo.pop(addr)
+ else:
+ return
+
+ if not itemlist: return
+
+ l = len(serialized_hist)/80
+ tx_item = ''
+ for i in range(l-1, -1, -1):
+ if tx_item == '':
+ if not itemlist:
+ break
+ else:
+ tx_item = itemlist.pop(-1) # get the last element
+ tx_height = int(rev_hex(tx_item[36:39].encode('hex')), 16)
+
+ item = serialized_hist[80*i:80*(i+1)]
+ item_height = int(rev_hex(item[36:39].encode('hex')), 16)
+
+ if item_height < tx_height:
+ serialized_hist = serialized_hist[0:80*(i+1)] + tx_item + serialized_hist[80*(i+1):]
+ tx_item = ''
+
+ else:
+ serialized_hist = ''.join(itemlist) + tx_item + serialized_hist
+
+ self.batch_list[addr] = serialized_hist
+
+
+ def set_spent_bit(self, addr, txi, is_spent, txid=None, index=None, height=None):
+ serialized_hist = self.batch_list[addr]
+ l = len(serialized_hist)/80
+ for i in range(l):
+ item = serialized_hist[80*i:80*(i+1)]
if item[0:36] == txi:
- height = int(rev_hex(item[36:40].encode('hex')), 16)
- serialized_hist = serialized_hist[0:40*i] + serialized_hist[40*(i+1):]
+ if is_spent:
+ new_item = item[0:39] + chr(1) + self.serialize_item(txid, index, height, chr(2))
+ else:
+ new_item = item[0:39] + chr(0) + chr(0)*40
+ serialized_hist = serialized_hist[0:80*i] + new_item + serialized_hist[80*(i+1):]
break
else:
+ self.shared.stop()
hist = self.deserialize(serialized_hist)
- raise BaseException("prevout not found", addr, hist, tx_hash, tx_pos)
+ raise BaseException("prevout not found", addr, hist, txi.encode('hex'))
self.batch_list[addr] = serialized_hist
- return height, addr
+
+
+ def unset_spent_bit(self, addr, txi):
+ self.set_spent_bit(addr, txi, False)
+ self.batch_txio[txi] = addr
+
def deserialize_block(self, block):
txlist = block.get('tx')
t00 = time.time()
+ # undo info
+ if revert:
+ undo_info = self.get_undo_info(block_height)
+ else:
+ undo_info = {}
+
+
if not revert:
# read addresses of tx inputs
for tx in txdict.values():
for txi in block_inputs:
try:
addr = self.db.Get(txi)
- except:
+ except KeyError:
# the input could come from the same block
continue
+ except:
+ traceback.print_exc(file=sys.stdout)
+ self.shared.stop()
+ raise
+
self.batch_txio[txi] = addr
addr_to_read.append(addr)
for x in tx.get('outputs'):
txo = (txid + int_to_hex(x.get('index'), 4)).decode('hex')
block_outputs.append(txo)
+ addr_to_read.append( x.get('address') )
+
+ undo = undo_info.get(txid)
+ for i, x in enumerate(tx.get('inputs')):
+ addr = undo['prev_addr'][i]
+ addr_to_read.append(addr)
+
+
+
+
# read histories of addresses
for txid, tx in txdict.items():
for addr in addr_to_read:
try:
self.batch_list[addr] = self.db.Get(addr)
- except:
+ except KeyError:
self.batch_list[addr] = ''
+ except:
+ traceback.print_exc(file=sys.stdout)
+ self.shared.stop()
+ raise
- if revert:
- undo_info = self.get_undo_info(block_height)
- # print "undo", block_height, undo_info
- else:
- undo_info = {}
# process
t1 = time.time()
if revert:
tx_hashes = tx_hashes[::-1]
+
+
for txid in tx_hashes: # must be ordered
tx = txdict[txid]
if not revert:
- undo = []
- for x in tx.get('inputs'):
- prevout_height, prevout_addr = self.remove_from_history(None, x.get('prevout_hash'), x.get('prevout_n'))
- undo.append((prevout_height, prevout_addr))
- undo_info[txid] = undo
+ undo = { 'prev_addr':[] } # contains the list of pruned items for each address in the tx; also, 'prev_addr' is a list of prev addresses
+
+ prev_addr = []
+ for i, x in enumerate(tx.get('inputs')):
+ txi = (x.get('prevout_hash') + int_to_hex(x.get('prevout_n'), 4)).decode('hex')
+ addr = self.batch_txio[txi]
+ # add redeem item to the history.
+ # add it right next to the input txi? this will break history sorting, but it's ok if I neglect tx inputs during search
+ self.set_spent_bit(addr, txi, True, txid, i, block_height)
+
+ # when I prune, prune a pair
+ self.prune_history(addr, undo)
+ prev_addr.append(addr)
+
+ undo['prev_addr'] = prev_addr
+
+ # here I add only the outputs to history; maybe I want to add inputs too (that's in the other loop)
for x in tx.get('outputs'):
- self.add_to_history(x.get('address'), txid, x.get('index'), block_height)
+ addr = x.get('address')
+ self.add_to_history(addr, txid, x.get('index'), block_height)
+ self.prune_history(addr, undo) # prune here because we increased the length of the history
+
+ undo_info[txid] = undo
else:
+
+ undo = undo_info.pop(txid)
+
for x in tx.get('outputs'):
- self.remove_from_history(x.get('address'), txid, x.get('index'))
+ addr = x.get('address')
+ self.revert_prune_history(addr, undo)
+ self.revert_add_to_history(addr, txid, x.get('index'), block_height)
+
+ prev_addr = undo.pop('prev_addr')
+ for i, x in enumerate(tx.get('inputs')):
+ addr = prev_addr[i]
+ self.revert_prune_history(addr, undo)
+ txi = (x.get('prevout_hash') + int_to_hex(x.get('prevout_n'), 4)).decode('hex')
+ self.unset_spent_bit(addr, txi)
- i = 0
- for x in tx.get('inputs'):
- prevout_height, prevout_addr = undo_info.get(txid)[i]
- i += 1
+ assert undo == {}
- # read the history into batch list
- if self.batch_list.get(prevout_addr) is None:
- self.batch_list[prevout_addr] = self.db.Get(prevout_addr)
+ if revert:
+ assert undo_info == {}
- # re-add them to the history
- self.add_to_history(prevout_addr, x.get('prevout_hash'), x.get('prevout_n'), prevout_height)
- # print_log("new hist for", prevout_addr, self.deserialize(self.batch_list[prevout_addr]) )
# write
max_len = 0
batch = leveldb.WriteBatch()
for addr, serialized_hist in self.batch_list.items():
batch.Put(addr, serialized_hist)
- l = len(serialized_hist)
+ l = len(serialized_hist)/80
if l > max_len:
max_len = l
max_addr = addr
else:
# restore spent inputs
for txio, addr in self.batch_txio.items():
+ # print "restoring spent input", repr(txio)
batch.Put(txio, addr)
# delete spent outputs
for txo in block_outputs:
batch.Delete(txo)
# add the max
- batch.Put('height', self.serialize([(block_hash, block_height, 0)]))
+ batch.Put('height', self.serialize([(block_hash, block_height, self.db_version)]))
# actual write
self.db.Write(batch, sync=sync)
try:
addr = self.db.Get(txi)
except:
- continue
+ tx_prev = self.get_mempool_transaction(x.get('prevout_hash'))
+ try:
+ addr = tx_prev['outputs'][x.get('prevout_n')]['address']
+ if not addr: continue
+ except:
+ continue
l = self.mempool_addresses.get(tx_hash, [])
if addr not in l:
l.append(addr)
if config.get('server', 'coin') == 'litecoin':
self.prepend = 'EL_'
self.pruning = config.get('server', 'backend') == 'leveldb'
+ if self.pruning:
+ self.pruning_limit = config.get('leveldb', 'pruning_limit')
self.nick = self.prepend + self.nick
def get_peers(self):
def getname(self):
s = 'v' + VERSION + ' '
if self.pruning:
- s += 'p '
- if self.stratum_tcp_port:
- s += 't' + self.stratum_tcp_port + ' '
- if self.stratum_http_port:
- s += 'h' + self.stratum_http_port + ' '
- if self.stratum_tcp_port:
- s += 's' + self.stratum_tcp_ssl_port + ' '
- if self.stratum_http_port:
- s += 'g' + self.stratum_http_ssl_port + ' '
+ s += 'p' + self.pruning_limit + ' '
+
+ def add_port(letter, number):
+ DEFAULT_PORTS = {'t':'50001', 's':'50002', 'h':'8081', 'g':'8082'}
+ if not number: return ''
+ if DEFAULT_PORTS[letter] == number:
+ return letter + ' '
+ else:
+ return letter + number + ' '
+
+ s += add_port('t',self.stratum_tcp_port)
+ s += add_port('h',self.stratum_http_port)
+ s += add_port('s',self.stratum_tcp_ssl_port)
+ s += add_port('g',self.stratum_http_ssl_port)
return s
def run(self):
#ssl_certfile = /path/to/electrum-server.crt
#ssl_keyfile = /path/to/electrum-server.key
-#default backend is abe
-#backend = leveldb
+# default backend is leveldb (pruning server)
+backend = leveldb
-#for abe only, number of requests per single hash
-#limit = 1000
+[leveldb]
+path = /path/to/your/database
+# for each address, history will be pruned if it is longer than this limit
+pruning_limit = 100
-[database]
-type = MySQLdb
-database = electrum
-username = electrum
-password = secret
+
+# ABE configuration for full servers
+# Backends other than leveldb are deprecated and currently unsupported
+
+# number of requests per single hash
+# limit = 1000
+
+# [database]
+# type = MySQLdb
+# database = electrum
+# username = electrum
+# password = secret
# [database]
# type = psycopg2
# type = sqlite3
# database = electrum.sqlite
-# comment database section above
-# if you use backend = leveldb
-# [leveldb]
-# path = /path/to/your/database
[bitcoind]
host = localhost
config.set('server', 'irc_nick', '')
config.set('server', 'coin', '')
config.set('server', 'datadir', '')
- config.add_section('database')
- config.set('database', 'type', 'psycopg2')
- config.set('database', 'database', 'abe')
- config.set('database', 'limit', '1000')
- config.set('server', 'backend', 'abe')
+
+ # use leveldb as default
+ config.set('server', 'backend', 'leveldb')
+ config.add_section('leveldb')
+ config.set('leveldb', 'path', '/dev/shm/electrum_db')
+ config.set('leveldb', 'pruning_limit', '100')
for path in ('/etc/', ''):
filename = path + 'electrum.conf'
except BaseException, e:
error = str(e)
print_log("cannot start TCP session", error)
+ time.sleep(0.1)
continue
self.dispatcher.add_session(session)
self.dispatcher.collect_garbage()
-VERSION = "0.7"
+VERSION = "0.8"