Lines that lack hash or dollar signs are pastes from config files. They
should be copied verbatim or adapted, without the indentation tab.
+apt-get install commands are suggestions for required dependencies.
+They target an Ubuntu 13.04 system but may well work with Debian,
+or with earlier and later versions of Ubuntu.
+
Prerequisites
-------------
perform the operation described here, you are expected to fix the
issue so you can continue following this howto.
-**Software.** A recent Linux distribution with the following software
-installed: `python`, `easy_install`, `git`, a SQL server, standard C/C++
+**Software.** A recent Linux 64-bit distribution with the following software
+installed: `python`, `easy_install`, `git`, standard C/C++
build chain. You will need root access in order to install other software or
-Python libraries. You will need access to the SQL server to create users and
-databases.
-
-**Hardware.** It's recommended to run a pruning server with leveldb.
-It is a light setup with diskspace requirements well under 1 GB growing
-very moderately and less taxing on I/O and CPU once it's up and running.
-Full (archival) servers on the other hand use SQL. At the time of this writing,
-the Bitcoin blockchain is 5.5 GB large. The corresponding SQL database is
-about 4 time larger, so you should have a minimum of 22 GB free space just
-for SQL, growing continuously.
-CPU speed is also important, mostly for the initial block chain import, but
-also if you plan to run a public Electrum server, which could serve tens
-of concurrent requests. See step 6 below for some initial import benchmarks
-on SQL.
+Python libraries.
+
+**Hardware.** The lightest setup is a pruning server with diskspace
+requirements well under 1 GB growing very moderately and less taxing
+on I/O and CPU once it's up and running. However, note that you also need
+to run bitcoind and keep a copy of the full blockchain, which is roughly
+9 GB in April 2013. If you have less than 2 GB of RAM, make sure you limit
+bitcoind to 8 concurrent connections. If you have more resources to
+spare, you can run the server with a higher limit of historic transactions
+per address. CPU speed is also important, mostly for the initial block
+chain import, but also if you plan to run a public Electrum server, which
+could serve tens of concurrent requests. Any multi-core x86 CPU ~2009 or
+newer other than Atom should do for good performance.
Instructions
------------
-### Step 0. Create a user for running bitcoind and Electrum server
+### Step 1. Create a user for running bitcoind and Electrum server
This step is optional, but for better security and resource separation I
suggest you create a separate user just for running `bitcoind` and Electrum.
(others might want to use `/usr/local/bin` instead). We will download source
code files to the `~/src` directory.
- # sudo adduser bitcoin
+ # sudo adduser bitcoin --disabled-password
# su - bitcoin
$ mkdir ~/bin ~/src
$ echo $PATH
PATH="$HOME/bin:$PATH"
-### Step 1. Download and install Electrum
+### Step 2. Download and install Electrum
We will download the latest git snapshot for Electrum and 'install' it in
our ~/bin directory:
$ mkdir -p ~/src/electrum
$ cd ~/src/electrum
+ $ sudo apt-get install git
$ git clone https://github.com/spesmilo/electrum-server.git server
$ chmod +x ~/src/electrum/server/server.py
- $ ln -s ~/src/electrum/server/server.py ~/bin/electrum
+ $ ln -s ~/src/electrum/server/server.py ~/bin/electrum-server
-### Step 2. Donwnload Bitcoind from git & patch it
+### Step 3. Download Bitcoind stable & patch it
-In order for the latest versions of Electrum to work properly we will need to use the latest
-build from Git and also patch it with an electrum specific patch.
+In order for the latest versions of Electrum to work properly we currently recommend bitcoind 0.8.5 stable.
+0.8.5 can be downloaded from GitHub or SourceForge, and it needs to be patched with an Electrum-specific patch.
+bitcoind master (i.e. git HEAD) may not currently work with electrum-server, even if the patch applies cleanly.
- $ cd src && git clone git://github.com/bitcoin/bitcoin.git
- $ cd bitcoin
- $ patch -p1 < ~/src/electrum/server/patch/patch
- $ cd src && make -f makefile.unix
+ $ cd ~/src && wget http://sourceforge.net/projects/bitcoin/files/Bitcoin/bitcoin-0.8.5/bitcoin-0.8.5-linux.tar.gz
+ $ tar xfz bitcoin-0.8.5-linux.tar.gz
+ $ cd bitcoin-0.8.5-linux/src
+ $ patch -p1 < ~/src/electrum/server/patch/patch
+ $ cd src
+ $ sudo apt-get install make g++ python-leveldb libboost-all-dev libssl-dev libdb++-dev
+ $ make USE_UPNP= -f makefile.unix
+ $ strip ~/src/bitcoin-0.8.5-linux/src/src/bitcoind
+ $ ln -s ~/src/bitcoin-0.8.5-linux/src/src/bitcoind ~/bin/bitcoind
-### Step 3. Configure and start bitcoind
+### Step 4. Configure and start bitcoind
In order to allow Electrum to "talk" to `bitcoind`, we need to set up a RPC
username and password for `bitcoind`. We will then start `bitcoind` and
time, running as the 'bitcoin' user. Check your system documentation to
find out the best way to do this.
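
For reference, a minimal `~/.bitcoin/bitcoin.conf` sketch for this step (the username and password values here are placeholders you must replace with your own):

```ini
server=1
daemon=1
rpcuser=electrumrpc
rpcpassword=changeme
# limit connections if you have less than 2 GB of RAM (see Prerequisites)
maxconnections=8
```

The same rpcuser/rpcpassword credentials go into the bitcoind section of your Electrum server config so the two can talk to each other.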
-
-### Step 4. Select your backend - pruning leveldb or full abe server
-
-Electrum server can currently be operated in two modes - as a pruning server
-or as a full server. The pruning server uses leveldb and keeps a smaller and
-faster database by pruning spent transactions. It's a lot quicker to get up
-and running and requires less maintenance and diskspace than the full abe
-server.
-
-The full version uses abe as a backend. While the blockchain in bitcoind
-is at roughly 5.5 GB in January 2013, the abe mysql for a full server requires
-~25 GB diskspace for innodb and can take a week or two (!) to freshly index
-on most but the fastest of hardware.
-
-Full servers are useful for recovering all past transactions when restoring
-from seed. Those are then stored in electrum.dat and won't need to be recovered
-until electrum.dat is removed. Pruning servers summarize spent transactions
-when restoring from seed which can be feature. Once seed recovery is done
-switching between pruning and full servers can be done at any time without effect
-to the transaction history stored in electrum.dat.
-
-While it's useful for Electrum to have a number of full servers it is
-expected that the vast majority of servers available publicly will be
-pruning servers.
-
-If you decide to setup a pruning server with leveldb take a break from this
-document, read and work through README.leveldb then come back
-install jsonrcp (but not abe) from step 5 and then skip to step 8
-
### Step 5. Install Electrum dependencies
Electrum server depends on various standard Python libraries. These will be
already installed on your distribution, or can be installed with your
-package manager. Electrum also depends on two Python libraries which we wil
-l need to install "by hand": `Abe` and `JSONRPClib`.
+package manager. Electrum also depends on one Python library which we will
+need to install "by hand": `JSONRPClib`.
+ $ sudo apt-get install python-setuptools
$ sudo easy_install jsonrpclib
- $ cd ~/src
- $ wget https://github.com/jtobey/bitcoin-abe/archive/v0.7.1.tar.gz
- $ cd bitcoin-abe
- $ sudo python setup.py install
-
-Electrum server does not currently support abe > 0.7.1 so please stick
-with 0.7.1 for the time being. If you're version is < 0.7 you need to upgrade
-to 0.7.1!
+ $ sudo apt-get install python-openssl
-Please note that the path below might be slightly different on your system,
-for example python2.6 or 2.8.
+### Step 6. Install leveldb
- $ sudo chmod +x /usr/local/lib/python2.7/dist-packages/Abe/abe.py
- $ ln -s /usr/local/lib/python2.7/dist-packages/Abe/abe.py ~/bin/abe
+ $ sudo apt-get install python-leveldb
+
+See the steps in README.leveldb for further details, especially if your system
+doesn't have the python-leveldb package.
+### Step 7. Select your limit
-### Step 6. Configure the database
+Electrum server uses leveldb to store transactions. You can choose
+how many spent transactions per address you want to store on the server.
+The default is 100, but there are also servers with 1000 or even 10000.
+Few addresses have more than 10000 transactions. A limit this high
+can be considered equivalent to a "full" server. Full servers previously
+used abe to store the blockchain. The use of abe for electrum servers is now
+deprecated.
-Electrum server uses a SQL database to store the blockchain data. In theory,
-it supports all databases supported by Abe. At the time of this writing,
-MySQL and PostgreSQL are tested and work ok, SQLite was tested and *does not
-work* with Electrum server.
-
-For MySQL:
-
- $ mysql -u root -p
- mysql> create user 'electrum'@'localhost' identified by '<db-password>';
- mysql> create database electrum;
- mysql> grant all on electrum.* to 'electrum'@'localhost';
- mysql> exit
+The pruning server uses leveldb and keeps a smaller and
+faster database by pruning spent transactions. It's a lot quicker to get up
+and running and requires less maintenance and diskspace than abe.
-For PostgreSQL:
+The section in the configuration file looks like this:
- TBW!
+ [leveldb]
+ path = /path/to/your/database
+ # for each address, history will be pruned if it is longer than this limit
+ pruning_limit = 100
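
To illustrate what the limit means, here is a simplified Python sketch of the pruning idea (a hypothetical model for illustration only; the real server prunes serialized records in leveldb, and only ever removes spent transactions):

```python
def prune_history(history, limit):
    """Trim an address history down to `limit` entries by dropping the
    oldest spent transactions first; unspent outputs are always kept."""
    pruned = list(history)
    i = 0
    while len(pruned) > limit and i < len(pruned):
        if pruned[i][2]:      # third field: True when the output is spent
            pruned.pop(i)     # prune the oldest spent entry
        else:
            i += 1            # never prune unspent outputs
    return pruned

# With limit 2, the two oldest spent entries are dropped:
history = [("tx_a", 1, True), ("tx_b", 2, True),
           ("tx_c", 3, False), ("tx_d", 4, True)]
print(prune_history(history, 2))
```

A higher limit keeps more history per address at the cost of disk space and import time, which is the trade-off the table of database sizes below this section reflects.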
-### Step 7. Configure Abe and import blockchain into the database
+### Step 8. Import blockchain into the database or download it
-When you run Electrum server for the first time, it will automatically
-import the blockchain into the database, so it is safe to skip this step.
-However, our tests showed that, at the time of this writing, importing the
-blockchain via Abe is much faster (about 20-30 times faster) than
-allowing Electrum to do it.
+As of April 2013 it takes between 6 and 24 hours to import 230k blocks, depending
+on CPU speed, I/O speed and selected pruning limit.
- $ cp ~/src/bitcoin-abe/abe.conf ~/abe.conf
- $ $EDITOR ~/abe.conf
+It's considerably faster to index in memory. You can use /dev/shm for indexing in RAM,
+or create a tmpfs, which will also use swap if you run out of memory:
-For MySQL, you need these lines:
+ $ sudo mount -t tmpfs -o rw,nodev,nosuid,noatime,size=6000M,mode=0777 none /tmpfs
- dbtype MySQLdb
- connect-args = { "db" : "electrum", "user" : "electrum" , "passwd" : "<database-password>" }
+At limit 100 the database comes to 2.6 GB with 230k blocks and takes roughly 6h to import in /dev/shm.
+At limit 1000 the database comes to 3.0 GB with 230k blocks and takes roughly 10h to import in /dev/shm.
+At limit 10000 the database comes to 3.5 GB with 230k blocks and takes roughly 24h to import in /dev/shm.
-For PostgreSQL, you need these lines:
+Alternatively, you can fetch a pre-processed leveldb from the net.
- TBD!
+You can fetch recent copies of electrum leveldb databases and further instructions
+from the Electrum full archival server foundry at:
+http://foundry.electrum.org/
-Start Abe:
- $ abe --config ~/abe.conf
+### Step 9. Create a self-signed SSL cert
-Abe will now start to import blocks. You will see a lot of lines like this:
+To run SSL / HTTPS you need to generate a self-signed certificate
+using openssl. You could just comment out the SSL / HTTPS ports in the config and run
+without, but this is not recommended.
- 'block_tx <block-number> <tx-number>'
+Use the sample commands below to create a self-signed cert with a validity
+of two years (the `-days 730` parameter). You may supply any information in your
+signing request to identify your server; it is not currently checked by the
+client except for the validity date.
+When asked for a challenge password just leave it empty and press enter.
-You should wait until you see this message on the screen:
+ $ openssl genrsa -des3 -passout pass:x -out server.pass.key 2048
+ $ openssl rsa -passin pass:x -in server.pass.key -out server.key
+ writing RSA key
+ $ rm server.pass.key
+ $ openssl req -new -key server.key -out server.csr
+ ...
+ Country Name (2 letter code) [AU]:US
+ State or Province Name (full name) [Some-State]:California
+ Common Name (eg, YOUR name) []: electrum-server.tld
+ ...
+ A challenge password []:
+ ...
- Listening on http://localhost:2750
+ $ openssl x509 -req -days 730 -in server.csr -signkey server.key -out server.crt
-It means the blockchain is imported and you can exit Abe by pressing CTRL-C.
-You will not need to run Abe again after this step, Electrum server will
-update the blockchain by itself. We only used Abe because it is much faster
-for the initial import.
+The server.crt file is your certificate, suitable for the ssl_certfile= parameter, and
+server.key corresponds to ssl_keyfile= in your Electrum server config.
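
A sketch of the corresponding part of /etc/electrum.conf (the section name follows the sample config; the paths are placeholders for wherever you keep the files):

```ini
[server]
ssl_certfile = /path/to/server.crt
ssl_keyfile = /path/to/server.key
```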
-Important notice: This is a *very* long process. Even on fast machines,
-expect it to take hours. Here are some benchmarks for importing
-~196K blocks (size of the Bitcoin blockchain in Septeber 2012):
+Starting with Electrum 1.9, the client will learn and locally cache the SSL certificate
+for your server upon the first request, to prevent man-in-the-middle attacks on all
+further connections.
- * System 1: ~9 hours.
- * CPU: Intel Core i7 Q740 @ 1.73GHz
- * HDD: very fast SSD
- * System 2: ~55 hours.
- * CPU: Intel Xeon X3430 @ 2.40GHz
- * HDD: 2 x SATA in a RAID1.
+If your certificate is lost or expires on the server side, you currently need to run
+your server under a different server name, along with a new certificate for that server.
+Therefore it's a good idea to make an offline backup copy of your certificate and key
+in case you need to restore them.
-### Step 8. Configure Electrum server
+### Step 10. Configure Electrum server
Electrum reads a config file (/etc/electrum.conf) when starting up. This
file includes the database setup, bitcoind RPC setup, and a few other
Go through the sample config options and set them to your liking.
If you intend to run the server publicly have a look at README-IRC.md
-Ifu're looking to run SSL / HTTPS you need to generate a self-signed certificate
-using openssl. Otherwise you can just comment out the SSL / HTTPS ports and run
-without.
+### Step 11. Tweak your system for running electrum
+
+Electrum server currently needs quite a few file handles to use leveldb. It also requires
+file handles for each connection made to the server. It's good practice to increase the
+open files limit to 16k. This is most easily achieved by adding the value to the .bashrc of the
+root user, who usually passes this value on to all unprivileged user sessions too.
+
+ $ sudo sed -i '$a ulimit -n 16384' /root/.bashrc
-### Step 9. (Finally!) Run Electrum server
+We're aware that the leveldb part of electrum server may leak some memory, so it's good
+practice to either restart the server once in a while from cron (preferred) or to at least
+monitor it for crashes and then restart it. Weekly restarts should be fine for most setups.
+If your server gets a lot of traffic and you have a limited amount of RAM, you may need to
+restart more often.
+
+Two more things for you to consider:
+
+1. To increase security you may want to close bitcoind to incoming connections and connect outbound only
+
+2. Consider restarting bitcoind (together with electrum-server) on a weekly basis to clear out unconfirmed
+   transactions from the local memory pool which did not propagate over the network
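
A hypothetical root crontab entry for such weekly restarts (the actual stop/start commands depend on how you run bitcoind and electrum-server, e.g. via the included `start` script or your own init script, so `restart-electrum.sh` here stands for a wrapper you would write yourself):

```
# m h dom mon dow  command -- restart every Sunday at 04:00
0 4 * * 0  /home/bitcoin/bin/restart-electrum.sh
```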
+
+### Step 12. (Finally!) Run Electrum server
The magic moment has come: you can now start your Electrum server:
- $ server
+ $ electrum-server
You should see this on the screen:
`~/src/electrum/server`. You can use them as a starting point to create a
init script for your system.
-### Step 10. Test the Electrum server
+### Step 13. Test the Electrum server
We will assume you have a working Electrum client, a wallet and some
transactions history. You should start the client and click on the green
response time in the Server selection window. You should send/receive some
bitcoins to confirm that everything is working properly.
-### Step 11. Join us on IRC
+### Step 14. Join us on IRC, subscribe to the server thread
Say hi to the dev crew, other server operators and fans on
irc.freenode.net #electrum and we'll try to congratulate you
on supporting the community by running an Electrum node
+
+If you're operating a public Electrum server please subscribe
+to or regularly check the following thread:
+https://bitcointalk.org/index.php?topic=85475.0
+It'll contain announcements about important updates to Electrum
+server required for a smooth user experience.
2. Install python-leveldb:
+Starting with Ubuntu 12.10 you can use apt to install leveldb. If you'd
+rather stay on 12.04 LTS you can use the backport and add
+"deb http://archive.ubuntu.com/ubuntu precise-backports main restricted universe"
+to your sources file. Install the package with:
+
sudo apt-get install python-leveldb
alternatively build yourself, see
[leveldb]
path = /path/to/your/database
+pruning_limit = 10
______________________________________________________________
./server load : view the size of the queue
+______________________
+Troubleshooting:
+
+* if your server or bitcoind is killed because it uses too much
+memory, configure bitcoind to limit the number of connections
+
+* if you see "Too many open files" errors, you may need to increase
+your user's File Descriptors limit. For this, see
+http://www.cyberciti.biz/faq/linux-increase-the-maximum-number-of-open-files/
Features
--------
- * The server uses a bitcoind and bitcoin-abe or a leveldb backend.
+ * The server uses a bitcoind and a leveldb backend.
* The server code is open source. Anyone can run a server, removing single
points of failure concerns.
* The server knows which set of Bitcoin addresses belong to the same wallet,
------------
1. To install and run a pruning server (easiest setup) see README.leveldb
- 2. Install [bitcoin-abe](https://github.com/jtobey/bitcoin-abe).
- 3. Install [jsonrpclib](https://github.com/joshmarshall/jsonrpclib).
- 4. Launch the server: `nohup python -u server.py > /var/log/electrum.log &`
+ 2. Install [jsonrpclib](https://github.com/joshmarshall/jsonrpclib).
+ 3. Launch the server: `nohup python -u server.py > /var/log/electrum.log &`
or use the included `start` script.
See the included `HOWTO.md` for greater detail on the installation process.
-### Important Note
-
-Do not run bitcoin-abe and electrum-server simultaneously, because they will
-both try to update the database.
-
-If you want bitcoin-abe to be available on your website, run it with
-the `--no-update` option.
-
-### Upgrading Abe
-
-If you upgrade abe, you might need to update the database. In the abe directory, type:
-
- python -m Abe.abe --config=abe.conf --upgrade
-
License
-------
from utils import *
-class AbeStore(Datastore.Datastore):
+class AbeStore(DataStore.DataStore):
def __init__(self, config):
conf = DataStore.CONFIG_DEFAULTS
print_log(' addrtype = 48')
self.addrtype = 48
- Datastore.Datastore.__init__(self, args)
+ DataStore.DataStore.__init__(self, args)
# Use 1 (Bitcoin) if chain_id is not sent
self.chain_id = self.datadirs[0]["chain_id"] or 1
"index": int(pos),
"value": int(value),
})
- known_tx.append(self.hashout_hex(tx_hash))
+ known_tx.append(tx_hash)
# todo: sort them really...
txpoints = sorted(txpoints, key=operator.itemgetter("timestamp"))
# find subset.
# TODO: do not compute this on client request, better store the hash tree of each block in a database...
- merkle = map(decode, merkle)
- target_hash = decode(tx_hash)
+ merkle = map(hash_decode, merkle)
+ target_hash = hash_decode(tx_hash)
s = []
while len(merkle) != 1:
while merkle:
new_hash = Hash(merkle[0] + merkle[1])
if merkle[0] == target_hash:
- s.append(encode(merkle[1]))
+ s.append(hash_encode(merkle[1]))
target_hash = new_hash
elif merkle[1] == target_hash:
- s.append(encode(merkle[0]))
+ s.append(hash_encode(merkle[0]))
target_hash = new_hash
n.append(new_hash)
merkle = merkle[2:]
self.address_queue = Queue()
self.dbpath = config.get('leveldb', 'path')
+ self.pruning_limit = config.getint('leveldb', 'pruning_limit')
+ self.db_version = 1 # increase this when database needs to be updated
self.dblock = threading.Lock()
try:
- self.db = leveldb.LevelDB(self.dbpath)
+ self.db = leveldb.LevelDB(self.dbpath, paranoid_checks=True)
except:
traceback.print_exc(file=sys.stdout)
self.shared.stop()
config.get('bitcoind', 'host'),
config.get('bitcoind', 'port'))
+ while True:
+ try:
+ self.bitcoind('getinfo')
+ break
+ except:
+ print_log('cannot contact bitcoind...')
+ time.sleep(5)
+ continue
+
self.height = 0
self.is_test = False
self.sent_height = 0
try:
hist = self.deserialize(self.db.Get('height'))
- self.last_hash, self.height, _ = hist[0]
- print_log("hist", hist)
+ self.last_hash, self.height, db_version = hist[0]
+ print_log("Database version", self.db_version)
+ print_log("Blockchain height", self.height)
except:
- #traceback.print_exc(file=sys.stdout)
+ traceback.print_exc(file=sys.stdout)
print_log('initializing database')
self.height = 0
self.last_hash = '000000000019d6689c085ae165831e934ff763ae46a2a6c172b3f1b60a8ce26f'
+ db_version = self.db_version
+
+ # check version
+ if self.db_version != db_version:
+ print_log("Your database '%s' is deprecated. Please create a new database"%self.dbpath)
+ self.shared.stop()
+ return
# catch_up headers
self.init_headers(self.height)
shared.stop()
sys.exit(0)
- print_log("blockchain is up to date.")
+ print_log("Blockchain is up to date.")
+ self.memorypool_update()
+ print_log("Memory pool initialized.")
threading.Timer(10, self.main_iteration).start()
def serialize(self, h):
s = ''
for txid, txpos, height in h:
- s += txid + int_to_hex(txpos, 4) + int_to_hex(height, 4)
- return s.decode('hex')
+ s += self.serialize_item(txid, txpos, height)
+ return s
+
+ def serialize_item(self, txid, txpos, height, spent=chr(0)):
+ s = (txid + int_to_hex(txpos, 4) + int_to_hex(height, 3)).decode('hex') + spent
+ return s
+
+ def deserialize_item(self,s):
+ txid = s[0:32].encode('hex')
+ txpos = int(rev_hex(s[32:36].encode('hex')), 16)
+ height = int(rev_hex(s[36:39].encode('hex')), 16)
+ spent = s[39:40]
+ return (txid, txpos, height, spent)
def deserialize(self, s):
h = []
while s:
- txid = s[0:32].encode('hex')
- txpos = int(rev_hex(s[32:36].encode('hex')), 16)
- height = int(rev_hex(s[36:40].encode('hex')), 16)
+ txid, txpos, height, spent = self.deserialize_item(s[0:40])
h.append((txid, txpos, height))
- s = s[40:]
+ if spent == chr(1):
+ txid, txpos, height, spent = self.deserialize_item(s[40:80])
+ h.append((txid, txpos, height))
+ s = s[80:]
return h
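
The 40-byte record layout used by serialize_item/deserialize_item above (32-byte txid, 4-byte little-endian position, 3-byte little-endian height, 1 spent byte) can be sketched in self-contained form. This is a Python 3 re-implementation for illustration only; the server itself builds the records from the hex-string helpers in utils:

```python
import struct

def serialize_item(txid_hex, txpos, height, spent=0):
    """Pack one history record: 32-byte txid + 4-byte LE position
    + 3-byte LE height + 1 spent byte = 40 bytes total."""
    txid = bytes.fromhex(txid_hex)          # 32 bytes
    pos = struct.pack('<I', txpos)          # 4 bytes, little-endian
    hgt = struct.pack('<I', height)[:3]     # 3 bytes, little-endian
    return txid + pos + hgt + bytes([spent])

def deserialize_item(s):
    """Inverse of serialize_item for a single 40-byte record."""
    txid_hex = s[0:32].hex()
    txpos = struct.unpack('<I', s[32:36])[0]
    height = struct.unpack('<I', s[36:39] + b'\x00')[0]
    return txid_hex, txpos, height, s[39]

record = serialize_item('00' * 32, 5, 230000)
assert len(record) == 40
assert deserialize_item(record) == ('00' * 32, 5, 230000, 0)
```

A record whose spent byte is set is immediately followed in the database by a second 40-byte record describing the redeeming transaction, which is why deserialize above walks the serialized history in 80-byte strides when the spent bit equals chr(1).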
def block2header(self, b):
vds = deserialize.BCDataStream()
vds.write(raw_tx.decode('hex'))
-
- return deserialize.parse_Transaction(vds, is_coinbase=False)
+ try:
+ return deserialize.parse_Transaction(vds, is_coinbase=False)
+ except:
+ print_log("ERROR: cannot parse", txid)
+ return None
def get_history(self, addr, cache_only=False):
with self.cache_lock:
with self.dblock:
try:
- hash_160 = bc_address_to_hash_160(addr)
- hist = self.deserialize(self.db.Get(hash_160))
+ hist = self.deserialize(self.db.Get(addr))
is_known = True
except:
hist = []
is_known = False
- # should not be necessary
- hist.sort(key=lambda tup: tup[1])
- # check uniqueness too...
+ # sort history, because redeeming transactions are next to the corresponding txout
+ hist.sort(key=lambda tup: tup[2])
# add memory pool
with self.mempool_lock:
for txid in self.mempool_hist.get(addr, []):
hist.append((txid, 0, 0))
- hist = map(lambda x: {'tx_hash': x[0], 'height': x[2]}, hist)
+ # uniqueness
+ hist = set(map(lambda x: (x[0], x[2]), hist))
+
+ # convert to dict
+ hist = map(lambda x: {'tx_hash': x[0], 'height': x[1]}, hist)
+
# add something to distinguish between unused and empty addresses
if hist == [] and is_known:
hist = ['*']
return {"block_height": height, "merkle": s, "pos": tx_pos}
+
def add_to_history(self, addr, tx_hash, tx_pos, tx_height):
# keep it sorted
- s = (tx_hash + int_to_hex(tx_pos, 4) + int_to_hex(tx_height, 4)).decode('hex')
+ s = self.serialize_item(tx_hash, tx_pos, tx_height) + 40*chr(0)
+ assert len(s) == 80
serialized_hist = self.batch_list[addr]
- l = len(serialized_hist)/40
+ l = len(serialized_hist)/80
for i in range(l-1, -1, -1):
- item = serialized_hist[40*i:40*(i+1)]
- item_height = int(rev_hex(item[36:40].encode('hex')), 16)
- if item_height < tx_height:
- serialized_hist = serialized_hist[0:40*(i+1)] + s + serialized_hist[40*(i+1):]
+ item = serialized_hist[80*i:80*(i+1)]
+ item_height = int(rev_hex(item[36:39].encode('hex')), 16)
+ if item_height <= tx_height:
+ serialized_hist = serialized_hist[0:80*(i+1)] + s + serialized_hist[80*(i+1):]
break
else:
serialized_hist = s + serialized_hist
txo = (tx_hash + int_to_hex(tx_pos, 4)).decode('hex')
self.batch_txio[txo] = addr
- def remove_from_history(self, addr, tx_hash, tx_pos):
- txi = (tx_hash + int_to_hex(tx_pos, 4)).decode('hex')
- if addr is None:
- try:
- addr = self.batch_txio[txi]
- except:
- raise BaseException(tx_hash, tx_pos)
+
+ def revert_add_to_history(self, addr, tx_hash, tx_pos, tx_height):
serialized_hist = self.batch_list[addr]
+ s = self.serialize_item(tx_hash, tx_pos, tx_height) + 40*chr(0)
+ if serialized_hist.find(s) == -1: raise
+ serialized_hist = serialized_hist.replace(s, '')
+ self.batch_list[addr] = serialized_hist
+
+
- l = len(serialized_hist)/40
+ def prune_history(self, addr, undo):
+ # remove items that have bit set to one
+ if undo.get(addr) is None: undo[addr] = []
+
+ serialized_hist = self.batch_list[addr]
+ l = len(serialized_hist)/80
+ for i in range(l):
+ if len(serialized_hist)/80 < self.pruning_limit: break
+ item = serialized_hist[80*i:80*(i+1)]
+ if item[39:40] == chr(1):
+ assert item[79:80] == chr(2)
+ serialized_hist = serialized_hist[0:80*i] + serialized_hist[80*(i+1):]
+ undo[addr].append(item) # items are ordered
+ self.batch_list[addr] = serialized_hist
+
+
+ def revert_prune_history(self, addr, undo):
+ # restore removed items
+ serialized_hist = self.batch_list[addr]
+
+ if undo.get(addr) is not None:
+ itemlist = undo.pop(addr)
+ else:
+ return
+
+ if not itemlist: return
+
+ l = len(serialized_hist)/80
+ tx_item = ''
+ for i in range(l-1, -1, -1):
+ if tx_item == '':
+ if not itemlist:
+ break
+ else:
+ tx_item = itemlist.pop(-1) # get the last element
+ tx_height = int(rev_hex(tx_item[36:39].encode('hex')), 16)
+
+ item = serialized_hist[80*i:80*(i+1)]
+ item_height = int(rev_hex(item[36:39].encode('hex')), 16)
+
+ if item_height < tx_height:
+ serialized_hist = serialized_hist[0:80*(i+1)] + tx_item + serialized_hist[80*(i+1):]
+ tx_item = ''
+
+ else:
+ serialized_hist = ''.join(itemlist) + tx_item + serialized_hist
+
+ self.batch_list[addr] = serialized_hist
+
+
+ def set_spent_bit(self, addr, txi, is_spent, txid=None, index=None, height=None):
+ serialized_hist = self.batch_list[addr]
+ l = len(serialized_hist)/80
for i in range(l):
- item = serialized_hist[40*i:40*(i+1)]
+ item = serialized_hist[80*i:80*(i+1)]
if item[0:36] == txi:
- height = int(rev_hex(item[36:40].encode('hex')), 16)
- serialized_hist = serialized_hist[0:40*i] + serialized_hist[40*(i+1):]
+ if is_spent:
+ new_item = item[0:39] + chr(1) + self.serialize_item(txid, index, height, chr(2))
+ else:
+ new_item = item[0:39] + chr(0) + chr(0)*40
+ serialized_hist = serialized_hist[0:80*i] + new_item + serialized_hist[80*(i+1):]
break
else:
+ self.shared.stop()
hist = self.deserialize(serialized_hist)
- raise BaseException("prevout not found", addr, hist, tx_hash, tx_pos)
+ raise BaseException("prevout not found", addr, hist, txi.encode('hex'))
self.batch_list[addr] = serialized_hist
- return height, addr
+
+
+ def unset_spent_bit(self, addr, txi):
+ self.set_spent_bit(addr, txi, False)
+ self.batch_txio[txi] = addr
+
def deserialize_block(self, block):
txlist = block.get('tx')
is_coinbase = True
for raw_tx in txlist:
tx_hash = hash_encode(Hash(raw_tx.decode('hex')))
- tx_hashes.append(tx_hash)
vds = deserialize.BCDataStream()
vds.write(raw_tx.decode('hex'))
- tx = deserialize.parse_Transaction(vds, is_coinbase)
+ try:
+ tx = deserialize.parse_Transaction(vds, is_coinbase)
+ except:
+ print_log("ERROR: cannot parse", tx_hash)
+ continue
+ tx_hashes.append(tx_hash)
txdict[tx_hash] = tx
is_coinbase = False
return tx_hashes, txdict
t00 = time.time()
+ # undo info
+ if revert:
+ undo_info = self.get_undo_info(block_height)
+ else:
+ undo_info = {}
+
+
if not revert:
# read addresses of tx inputs
for tx in txdict.values():
for txi in block_inputs:
try:
addr = self.db.Get(txi)
- except:
+ except KeyError:
# the input could come from the same block
continue
+ except:
+ traceback.print_exc(file=sys.stdout)
+ self.shared.stop()
+ raise
+
self.batch_txio[txi] = addr
addr_to_read.append(addr)
for x in tx.get('outputs'):
txo = (txid + int_to_hex(x.get('index'), 4)).decode('hex')
block_outputs.append(txo)
+ addr_to_read.append( x.get('address') )
+
+ undo = undo_info.get(txid)
+ for i, x in enumerate(tx.get('inputs')):
+ addr = undo['prev_addr'][i]
+ addr_to_read.append(addr)
+
+
+
+
# read histories of addresses
for txid, tx in txdict.items():
for x in tx.get('outputs'):
- hash_160 = bc_address_to_hash_160(x.get('address'))
- addr_to_read.append(hash_160)
+ addr_to_read.append(x.get('address'))
addr_to_read.sort()
for addr in addr_to_read:
try:
self.batch_list[addr] = self.db.Get(addr)
- except:
+ except KeyError:
self.batch_list[addr] = ''
+ except:
+ traceback.print_exc(file=sys.stdout)
+ self.shared.stop()
+ raise
- if revert:
- undo_info = self.get_undo_info(block_height)
- # print "undo", block_height, undo_info
- else:
- undo_info = {}
# process
t1 = time.time()
if revert:
tx_hashes = tx_hashes[::-1]
+
+
for txid in tx_hashes: # must be ordered
tx = txdict[txid]
if not revert:
- undo = []
- for x in tx.get('inputs'):
- prevout_height, prevout_addr = self.remove_from_history(None, x.get('prevout_hash'), x.get('prevout_n'))
- undo.append((prevout_height, prevout_addr))
- undo_info[txid] = undo
+ undo = { 'prev_addr':[] } # contains the list of pruned items for each address in the tx; also, 'prev_addr' is a list of prev addresses
+
+ prev_addr = []
+ for i, x in enumerate(tx.get('inputs')):
+ txi = (x.get('prevout_hash') + int_to_hex(x.get('prevout_n'), 4)).decode('hex')
+ addr = self.batch_txio[txi]
+
+ # add redeem item to the history.
+ # add it right next to the input txi? this will break history sorting, but it's ok if I neglect tx inputs during search
+ self.set_spent_bit(addr, txi, True, txid, i, block_height)
+
+ # when I prune, prune a pair
+ self.prune_history(addr, undo)
+ prev_addr.append(addr)
+ undo['prev_addr'] = prev_addr
+
+ # here I add only the outputs to history; maybe I want to add inputs too (that's in the other loop)
for x in tx.get('outputs'):
- hash_160 = bc_address_to_hash_160(x.get('address'))
- self.add_to_history(hash_160, txid, x.get('index'), block_height)
+ addr = x.get('address')
+ self.add_to_history(addr, txid, x.get('index'), block_height)
+ self.prune_history(addr, undo) # prune here because we increased the length of the history
+
+ undo_info[txid] = undo
else:
+
+ undo = undo_info.pop(txid)
+
for x in tx.get('outputs'):
- hash_160 = bc_address_to_hash_160(x.get('address'))
- self.remove_from_history(hash_160, txid, x.get('index'))
+ addr = x.get('address')
+ self.revert_prune_history(addr, undo)
+ self.revert_add_to_history(addr, txid, x.get('index'), block_height)
+
+ prev_addr = undo.pop('prev_addr')
+ for i, x in enumerate(tx.get('inputs')):
+ addr = prev_addr[i]
+ self.revert_prune_history(addr, undo)
+ txi = (x.get('prevout_hash') + int_to_hex(x.get('prevout_n'), 4)).decode('hex')
+ self.unset_spent_bit(addr, txi)
- i = 0
- for x in tx.get('inputs'):
- prevout_height, prevout_addr = undo_info.get(txid)[i]
- i += 1
+ assert undo == {}
- # read the history into batch list
- if self.batch_list.get(prevout_addr) is None:
- self.batch_list[prevout_addr] = self.db.Get(prevout_addr)
+ if revert:
+ assert undo_info == {}
- # re-add them to the history
- self.add_to_history(prevout_addr, x.get('prevout_hash'), x.get('prevout_n'), prevout_height)
- # print_log("new hist for", hash_160_to_bc_address(prevout_addr), self.deserialize(self.batch_list[prevout_addr]) )
# write
max_len = 0
batch = leveldb.WriteBatch()
for addr, serialized_hist in self.batch_list.items():
batch.Put(addr, serialized_hist)
- l = len(serialized_hist)
+ l = len(serialized_hist)/80
if l > max_len:
max_len = l
max_addr = addr
else:
# restore spent inputs
for txio, addr in self.batch_txio.items():
+ # print "restoring spent input", repr(txio)
batch.Put(txio, addr)
# delete spent outputs
for txo in block_outputs:
batch.Delete(txo)
# add the max
- batch.Put('height', self.serialize([(block_hash, block_height, 0)]))
+ batch.Put('height', self.serialize([(block_hash, block_height, self.db_version)]))
# actual write
self.db.Write(batch, sync=sync)
"read:%0.2f " % (t1 - t00),
"proc:%.2f " % (t2-t1),
"write:%.2f " % (t3-t2),
- "max:", max_len, hash_160_to_bc_address(max_addr))
+ "max:", max_len, max_addr)
- for h160 in self.batch_list.keys():
- addr = hash_160_to_bc_address(h160)
+ for addr in self.batch_list.keys():
self.invalidate_cache(addr)
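The import_block hunk above records, per transaction, exactly what was added or pruned, so a reorg can replay the block backwards and check that every undo record is fully consumed (`assert undo == {}`, `assert undo_info == {}`). A toy sketch of that apply/revert symmetry, with illustrative names rather than the server's API:

```python
def apply_block(history, txs, undo_info):
    # append each txid to its addresses' histories, recording undo info
    for txid, addrs in txs:
        undo = []
        for addr in addrs:
            history.setdefault(addr, []).append(txid)
            undo.append(addr)
        undo_info[txid] = undo

def revert_block(history, txs, undo_info):
    # replay the block backwards; every undo record must be consumed
    for txid, _ in reversed(txs):
        for addr in reversed(undo_info.pop(txid)):
            assert history[addr].pop() == txid
    assert undo_info == {}

history, undo_info = {}, {}
txs = [('tx1', ['a', 'b']), ('tx2', ['a'])]
apply_block(history, txs, undo_info)
revert_block(history, txs, undo_info)
assert history == {'a': [], 'b': []}
```

Reverting in reverse order keeps each `pop()` hitting the most recently appended item, mirroring how the patch pops `undo_info[txid]` in the revert branch.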
def add_request(self, request):
address = params[1]
if password == self.config.get('server', 'password'):
self.watched_addresses.remove(address)
- print_log('unsubscribed', address)
+ # print_log('unsubscribed', address)
result = "ok"
else:
print_log('incorrect password')
tx_height = params[1]
result = self.get_merkle(tx_hash, tx_height)
except BaseException, e:
- error = str(e) + ': ' + tx_hash
- print_log("error:", error)
+ error = str(e) + ': ' + repr(params)
+ print_log("get_merkle error:", error)
elif method == 'blockchain.transaction.get':
try:
tx_hash = params[0]
result = self.bitcoind('getrawtransaction', [tx_hash, 0])
except BaseException, e:
- error = str(e) + ': ' + tx_hash
- print_log("error:", error)
+ error = str(e) + ': ' + repr(params)
+ print_log("tx get error:", error)
else:
error = "unknown method:%s" % method
return -1
if error:
- response = {'id': message_id, 'error': error}
+ self.push_response({'id': message_id, 'error': error})
elif result != '':
- response = {'id': message_id, 'result': result}
- self.push_response(response)
+ self.push_response({'id': message_id, 'result': result})
def watch_address(self, addr):
if addr not in self.watched_addresses:
def memorypool_update(self):
mempool_hashes = self.bitcoind('getrawmempool')
+ touched_addresses = []
for tx_hash in mempool_hashes:
if tx_hash in self.mempool_hashes:
continue
if not tx:
continue
+ mpa = self.mempool_addresses.get(tx_hash, [])
for x in tx.get('inputs'):
- txi = (x.get('prevout_hash') + int_to_hex(x.get('prevout_n'), 4)).decode('hex')
- try:
- h160 = self.db.Get(txi)
- addr = hash_160_to_bc_address(h160)
- except:
- continue
- l = self.mempool_addresses.get(tx_hash, [])
- if addr not in l:
- l.append(addr)
- self.mempool_addresses[tx_hash] = l
+ # we assume that the input address can be parsed by deserialize(); this is true for Electrum transactions
+ addr = x.get('address')
+ if addr and addr not in mpa:
+ mpa.append(addr)
+ touched_addresses.append(addr)
for x in tx.get('outputs'):
addr = x.get('address')
- l = self.mempool_addresses.get(tx_hash, [])
- if addr not in l:
- l.append(addr)
- self.mempool_addresses[tx_hash] = l
+ if addr and addr not in mpa:
+ mpa.append(addr)
+ touched_addresses.append(addr)
+ self.mempool_addresses[tx_hash] = mpa
self.mempool_hashes.append(tx_hash)
# remove older entries from mempool_hashes
for tx_hash, addresses in self.mempool_addresses.items():
if tx_hash not in self.mempool_hashes:
self.mempool_addresses.pop(tx_hash)
+ for addr in addresses:
+ touched_addresses.append(addr)
- # rebuild histories
+ # rebuild mempool histories
new_mempool_hist = {}
for tx_hash, addresses in self.mempool_addresses.items():
for addr in addresses:
h.append(tx_hash)
new_mempool_hist[addr] = h
- for addr in new_mempool_hist.keys():
- if addr in self.mempool_hist.keys():
- if self.mempool_hist[addr] != new_mempool_hist[addr]:
- self.invalidate_cache(addr)
- else:
- self.invalidate_cache(addr)
-
with self.mempool_lock:
self.mempool_hist = new_mempool_hist
+ # invalidate cache for touched addresses
+ for addr in touched_addresses:
+ self.invalidate_cache(addr)
+
+
def invalidate_cache(self, address):
with self.cache_lock:
- if 'address' in self.history_cache:
+ if address in self.history_cache:
print_log("cache: invalidating", address)
self.history_cache.pop(address)
if address in self.watched_addresses:
+ # TODO: update cache here. if new value equals cached value, do not send notification
self.address_queue.put(address)
def main_iteration(self):
t2 = time.time()
self.memorypool_update()
- t3 = time.time()
- # print "mempool:", len(self.mempool_addresses), len(self.mempool_hist), "%.3fs"%(t3 - t2)
if self.sent_height != self.height:
self.sent_height = self.height
d['prevout_n'] = vds.read_uint32()
scriptSig = vds.read_bytes(vds.read_compact_size())
d['sequence'] = vds.read_uint32()
- # actually I don't need that at all
- # if not is_coinbase: d['address'] = extract_public_key(scriptSig)
- # d['script'] = decode_script(scriptSig)
+
+ if scriptSig:
+ pubkeys, signatures, address = get_address_from_input_script(scriptSig)
+ else:
+ pubkeys = []
+ signatures = []
+ address = None
+
+ d['address'] = address
+ d['signatures'] = signatures
+
return d
d = {}
d['value'] = vds.read_int64()
scriptPubKey = vds.read_bytes(vds.read_compact_size())
- d['address'] = extract_public_key(scriptPubKey)
- #d['script'] = decode_script(scriptPubKey)
+ d['address'] = get_address_from_output_script(scriptPubKey)
d['raw_output_script'] = scriptPubKey.encode('hex')
d['index'] = i
return d
"OP_WITHIN", "OP_RIPEMD160", "OP_SHA1", "OP_SHA256", "OP_HASH160",
"OP_HASH256", "OP_CODESEPARATOR", "OP_CHECKSIG", "OP_CHECKSIGVERIFY", "OP_CHECKMULTISIG",
"OP_CHECKMULTISIGVERIFY",
- ("OP_SINGLEBYTE_END", 0xF0),
- ("OP_DOUBLEBYTE_BEGIN", 0xF000),
- "OP_PUBKEY", "OP_PUBKEYHASH",
- ("OP_INVALIDOPCODE", 0xFFFF),
+ "OP_NOP1", "OP_NOP2", "OP_NOP3", "OP_NOP4", "OP_NOP5", "OP_NOP6", "OP_NOP7", "OP_NOP8", "OP_NOP9", "OP_NOP10",
+ ("OP_INVALIDOPCODE", 0xFF),
])
vch = None
opcode = ord(bytes[i])
i += 1
- if opcode >= opcodes.OP_SINGLEBYTE_END:
- opcode <<= 8
- opcode |= ord(bytes[i])
- i += 1
if opcode <= opcodes.OP_PUSHDATA4:
nSize = opcode
elif opcode == opcodes.OP_PUSHDATA4:
(nSize,) = struct.unpack_from('<I', bytes, i)
i += 4
- vch = bytes[i:i+nSize]
- i += nSize
+ if i+nSize > len(bytes):
+ vch = "_INVALID_"+bytes[i:]
+ i = len(bytes)
+ else:
+ vch = bytes[i:i+nSize]
+ i += nSize
yield (opcode, vch, i)
def script_GetOpName(opcode):
+ try:
return (opcodes.whatis(opcode)).replace("OP_", "")
+ except KeyError:
+ return "InvalidOp_"+str(opcode)
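The `_INVALID_` marker above keeps script_GetOp from reading past the end of a malformed script. A minimal standalone sketch of the same guard (Python 3, hypothetical helper name, covering only direct pushes):

```python
def get_ops(script):
    # yield (opcode, pushed_bytes); a push that runs past the end of the
    # script yields an _INVALID_ marker instead of raising
    i = 0
    while i < len(script):
        vch = None
        opcode = script[i]
        i += 1
        if opcode <= 0x4b:  # direct push of `opcode` bytes
            if i + opcode > len(script):
                vch = b'_INVALID_' + script[i:]
                i = len(script)
            else:
                vch = script[i:i + opcode]
                i += opcode
        yield (opcode, vch)

# a script that promises 3 push bytes but only supplies 2
ops = list(get_ops(bytes([3, 0xaa, 0xbb])))
assert ops[0][1] == b'_INVALID_\xaa\xbb'
```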
def decode_script(bytes):
return True
-def extract_public_key(bytes):
- decoded = list(script_GetOp(bytes))
+
+def get_address_from_input_script(bytes):
+ try:
+ decoded = [ x for x in script_GetOp(bytes) ]
+ except:
+ # coinbase transactions raise an exception
+ return [], [], None
# non-generated TxIn transactions push a signature
# (seventy-something bytes) and then their public key
- # (65 bytes) onto the stack:
- match = [opcodes.OP_PUSHDATA4, opcodes.OP_PUSHDATA4]
+ # (33 or 65 bytes) onto the stack:
+
+ match = [ opcodes.OP_PUSHDATA4, opcodes.OP_PUSHDATA4 ]
if match_decoded(decoded, match):
- return public_key_to_bc_address(decoded[1][1])
+ return None, None, public_key_to_bc_address(decoded[1][1])
+
+ # p2sh transaction, 2 of n
+ match = [ opcodes.OP_0 ]
+ while len(match) < len(decoded):
+ match.append(opcodes.OP_PUSHDATA4)
+
+ if match_decoded(decoded, match):
+
+ redeemScript = decoded[-1][1]
+ num = len(match) - 2
+ signatures = map(lambda x:x[1].encode('hex'), decoded[1:-1])
+ dec2 = [ x for x in script_GetOp(redeemScript) ]
+
+ # 2 of 2
+ match2 = [ opcodes.OP_2, opcodes.OP_PUSHDATA4, opcodes.OP_PUSHDATA4, opcodes.OP_2, opcodes.OP_CHECKMULTISIG ]
+ if match_decoded(dec2, match2):
+ pubkeys = [ dec2[1][1].encode('hex'), dec2[2][1].encode('hex') ]
+ return pubkeys, signatures, hash_160_to_bc_address(hash_160(redeemScript), 5)
+
+ # 2 of 3
+ match2 = [ opcodes.OP_2, opcodes.OP_PUSHDATA4, opcodes.OP_PUSHDATA4, opcodes.OP_PUSHDATA4, opcodes.OP_3, opcodes.OP_CHECKMULTISIG ]
+ if match_decoded(dec2, match2):
+ pubkeys = [ dec2[1][1].encode('hex'), dec2[2][1].encode('hex'), dec2[3][1].encode('hex') ]
+ return pubkeys, signatures, hash_160_to_bc_address(hash_160(redeemScript), 5)
+
+ return [], [], None
+
+
+def get_address_from_output_script(bytes):
+ try:
+ decoded = [ x for x in script_GetOp(bytes) ]
+ except:
+ return "None"
# The Genesis Block, self-payments, and pay-by-IP-address payments look like:
# 65 BYTES:... CHECKSIG
if match_decoded(decoded, match):
return hash_160_to_bc_address(decoded[2][1])
- #raise BaseException("address not found in script") see ce35795fb64c268a52324b884793b3165233b1e6d678ccaadf760628ec34d76b
+ # p2sh
+ match = [ opcodes.OP_HASH160, opcodes.OP_PUSHDATA4, opcodes.OP_EQUAL ]
+ if match_decoded(decoded, match):
+ return hash_160_to_bc_address(decoded[1][1], 5)
+
return "None"
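Both new p2sh branches call hash_160_to_bc_address with version byte 5. A self-contained Python 3 sketch of base58check (helper names mirror the originals, but this is illustrative, not the server's code) shows why version 5 always yields a 34-character address starting with '3':

```python
import hashlib

B58 = '123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz'

def b58encode(v):
    n, out = int.from_bytes(v, 'big'), ''
    while n:
        n, r = divmod(n, 58)
        out = B58[r] + out
    # leading zero bytes become leading '1's
    pad = len(v) - len(v.lstrip(b'\x00'))
    return '1' * pad + out

def hash_160_to_bc_address(h160, addrtype=0):
    vh160 = bytes([addrtype]) + h160
    checksum = hashlib.sha256(hashlib.sha256(vh160).digest()).digest()[:4]
    return b58encode(vh160 + checksum)

# version 0 (pay-to-pubkey-hash): a zero hash gives an all-'1' prefix
assert hash_160_to_bc_address(b'\x00' * 20).startswith('1' * 21)
# version 5 (p2sh): always 34 characters, always starting with '3'
addr = hash_160_to_bc_address(b'\xab' * 20, 5)
assert addr.startswith('3') and len(addr) == 34
```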
if config.get('server', 'coin') == 'litecoin':
self.prepend = 'EL_'
self.pruning = config.get('server', 'backend') == 'leveldb'
+ if self.pruning:
+ self.pruning_limit = config.get('leveldb', 'pruning_limit')
self.nick = self.prepend + self.nick
def get_peers(self):
def getname(self):
s = 'v' + VERSION + ' '
if self.pruning:
- s += 'p '
- if self.stratum_tcp_port:
- s += 't' + self.stratum_tcp_port + ' '
- if self.stratum_http_port:
- s += 'h' + self.stratum_http_port + ' '
- if self.stratum_tcp_port:
- s += 's' + self.stratum_tcp_ssl_port + ' '
- if self.stratum_http_port:
- s += 'g' + self.stratum_http_ssl_port + ' '
+ s += 'p' + self.pruning_limit + ' '
+
+ def add_port(letter, number):
+ DEFAULT_PORTS = {'t':'50001', 's':'50002', 'h':'8081', 'g':'8082'}
+ if not number: return ''
+ if DEFAULT_PORTS[letter] == number:
+ return letter + ' '
+ else:
+ return letter + number + ' '
+
+ s += add_port('t',self.stratum_tcp_port)
+ s += add_port('h',self.stratum_http_port)
+ s += add_port('s',self.stratum_tcp_ssl_port)
+ s += add_port('g',self.stratum_http_ssl_port)
return s
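The add_port helper compresses the advertised ports: a default port is announced by its letter alone, a non-default port carries the number, and an unset port is omitted. The same logic as a standalone sketch:

```python
DEFAULT_PORTS = {'t': '50001', 's': '50002', 'h': '8081', 'g': '8082'}

def add_port(letter, number):
    # default port: letter alone; custom port: letter + number; unset: nothing
    if not number:
        return ''
    if DEFAULT_PORTS[letter] == number:
        return letter + ' '
    return letter + number + ' '

assert add_port('t', '50001') == 't '
assert add_port('h', '8090') == 'h8090 '
assert add_port('s', '') == ''
```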
def run(self):
time.sleep(10)
continue
+ self.message = ''
try:
s.send('USER electrum 0 * :' + self.host + ' ' + ircname + '\n')
s.send('NICK ' + self.nick + '\n')
s.send('JOIN #electrum\n')
- sf = s.makefile('r', 0)
t = 0
while not self.processor.shared.stopped():
- line = sf.readline().rstrip('\r\n').split()
- if not line:
- continue
- if line[0] == 'PING':
- s.send('PONG ' + line[1] + '\n')
- elif '353' in line: # answer to /names
- k = line.index('353')
- for item in line[k+1:]:
- if item.startswith(self.prepend):
- s.send('WHO %s\n' % item)
- elif '352' in line: # answer to /who
- # warning: this is a horrible hack which apparently works
- k = line.index('352')
- ip = socket.gethostbyname(line[k+4])
- name = line[k+6]
- host = line[k+9]
- ports = line[k+10:]
- self.peers[name] = (ip, host, ports)
+ try:
+ data = s.recv(2048)
+ except:
+ print_log("irc: socket error")
+ time.sleep(1)
+ break
+
+ self.message += data
+
+ while self.message.find('\n') != -1:
+ pos = self.message.find('\n')
+ line = self.message[0:pos]
+ self.message = self.message[pos+1:]
+ line = line.strip('\r')
+ if not line:
+ continue
+ line = line.split()
+ if line[0] == 'PING':
+ s.send('PONG ' + line[1] + '\n')
+ elif '353' in line: # answer to /names
+ k = line.index('353')
+ for item in line[k+1:]:
+ if item.startswith(self.prepend):
+ s.send('WHO %s\n' % item)
+ elif '352' in line: # answer to /who
+ # warning: this is a horrible hack which apparently works
+ k = line.index('352')
+ try:
+ ip = socket.gethostbyname(line[k+4])
+ except:
+ print_log("gethostbyname error", line[k+4])
+ continue
+ name = line[k+6]
+ host = line[k+9]
+ ports = line[k+10:]
+ self.peers[name] = (ip, host, ports)
+
if time.time() - t > 5*60:
self.processor.push_response({'method': 'server.peers', 'params': [self.get_peers()]})
s.send('NAMES #electrum\n')
except:
traceback.print_exc(file=sys.stdout)
finally:
- sf.close()
s.close()
print_log("quitting IRC")
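The rewritten IRC loop replaces the blocking makefile()/readline() pair with raw recv() chunks accumulated in self.message and split on newlines. The buffering logic, extracted as a small generator (illustrative, Python 3):

```python
def irc_lines(chunks):
    # accumulate recv() chunks and emit complete lines, as the new loop does
    message = ''
    for data in chunks:
        message += data
        while '\n' in message:
            line, message = message.split('\n', 1)
            line = line.strip('\r')
            if line:
                yield line.split()

# a line may arrive split across two recv() calls
lines = list(irc_lines(['PING :irc.ser', 'ver\r\n353 #electrum\r\n']))
assert lines == [['PING', ':irc.server'], ['353', '#electrum']]
```

Keeping the partial tail in the buffer is what lets the server survive lines fragmented across reads, which readline() hid but at the cost of an unkillable blocking call.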
elif method == 'server.cache':
p = self.dispatcher.request_dispatcher.processors['blockchain']
- result = len(repr(p.store.tx_cache))
+ result = len(repr(p.history_cache))
elif method == 'server.load':
p = self.dispatcher.request_dispatcher.processors['blockchain']
#ssl_certfile = /path/to/electrum-server.crt
#ssl_keyfile = /path/to/electrum-server.key
-#default backend is abe
-#backend = leveldb
+# default backend is leveldb (pruning server)
+backend = leveldb
-#for abe only, number of requests per single hash
-#limit = 1000
+[leveldb]
+path = /path/to/your/database
+# for each address, history will be pruned if it is longer than this limit
+pruning_limit = 100
-[database]
-type = MySQLdb
-database = electrum
-username = electrum
-password = secret
+
+# ABE configuration for full servers
+# Backends other than leveldb are deprecated and currently unsupported
+
+# number of requests per single hash
+# limit = 1000
+
+# [database]
+# type = MySQLdb
+# database = electrum
+# username = electrum
+# password = secret
# [database]
# type = psycopg2
# type = sqlite3
# database = electrum.sqlite
-# comment database section above
-# if you use backend = leveldb
-# [leveldb]
-# path = /path/to/your/database
[bitcoind]
host = localhost
self.internal_ids = {}
self.internal_id = 1
self.lock = threading.Lock()
+ self.idlock = threading.Lock()
self.sessions = []
self.processors = {}
return x
def get_session_id(self, internal_id):
- with self.lock:
+ with self.idlock:
return self.internal_ids.pop(internal_id)
def store_session_id(self, session, msgid):
- with self.lock:
+ with self.idlock:
self.internal_ids[self.internal_id] = session, msgid
r = self.internal_id
self.internal_id += 1
suffix = method.split('.')[-1]
if session is not None:
- is_new = session.protocol_version >= 0.5
if suffix == 'subscribe':
session.subscribe_to_service(method, params)
except:
pass
- #if session.protocol_version < 0.6:
- # print_log("stopping session from old client", session.protocol_version)
- # session.stop()
def get_sessions(self):
with self.lock:
def collect_garbage(self):
# Deep copy entire sessions list and blank it
- # This is done to minimise lock contention
+ # This is done to minimize lock contention
with self.lock:
sessions = self.sessions[:]
- self.sessions = []
+
+ active_sessions = []
+ now = time.time()
for session in sessions:
- if not session.stopped():
+ if not session.stopped() and (now - session.time) < 1000:
# If session is still alive then re-add it back
# to our internal register
- self.add_session(session)
+ active_sessions.append(session)
+
+ with self.lock:
+ self.sessions = active_sessions[:]
+
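collect_garbage now copies the session list under the lock, filters stopped and idle sessions (over 1000 seconds) outside it, then swaps the survivors back in under the lock, minimizing contention. A sketch of the pattern with a hypothetical SessionRegistry:

```python
import threading
import time

class SessionRegistry:
    # hypothetical stand-in for the dispatcher's session list
    def __init__(self):
        self.lock = threading.Lock()
        self.sessions = []

    def collect_garbage(self, max_idle=1000):
        # copy under the lock, filter outside it, swap the result back in
        with self.lock:
            sessions = self.sessions[:]
        now = time.time()
        active = [s for s in sessions
                  if not s.stopped() and (now - s.time) < max_idle]
        with self.lock:
            self.sessions = active

class FakeSession:
    def __init__(self, stopped, t):
        self._stopped, self.time = stopped, t
    def stopped(self):
        return self._stopped

reg = SessionRegistry()
reg.sessions = [FakeSession(False, time.time()),  # alive and fresh: kept
                FakeSession(True, time.time()),   # stopped: dropped
                FakeSession(False, 0)]            # idle too long: dropped
reg.collect_garbage()
assert len(reg.sessions) == 1
```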
class Session:
config.set('server', 'report_host', '')
config.set('server', 'stratum_tcp_port', '50001')
config.set('server', 'stratum_http_port', '8081')
- config.set('server', 'stratum_tcp_ssl_port', '50002')
- config.set('server', 'stratum_http_ssl_port', '8082')
+ config.set('server', 'stratum_tcp_ssl_port', '')
+ config.set('server', 'stratum_http_ssl_port', '')
config.set('server', 'report_stratum_tcp_port', '')
config.set('server', 'report_stratum_http_port', '')
config.set('server', 'report_stratum_tcp_ssl_port', '')
config.set('server', 'irc_nick', '')
config.set('server', 'coin', '')
config.set('server', 'datadir', '')
- config.add_section('database')
- config.set('database', 'type', 'psycopg2')
- config.set('database', 'database', 'abe')
- config.set('database', 'limit', '1000')
- config.set('server', 'backend', 'abe')
+
+ # use leveldb as default
+ config.set('server', 'backend', 'leveldb')
+ config.add_section('leveldb')
+ config.set('leveldb', 'path', '/dev/shm/electrum_db')
+ config.set('leveldb', 'pruning_limit', '100')
for path in ('/etc/', ''):
filename = path + 'electrum.conf'
msg = ''
while True:
o = s.recv(1024)
+ if not o: break
msg += o
if msg.find('\n') != -1:
break
self.send_response(200)
self.send_header('Allow', 'GET, POST, OPTIONS')
self.send_header('Access-Control-Allow-Origin', '*')
- self.send_header('Access-Control-Allow-Headers', '*')
+ self.send_header('Access-Control-Allow-Headers', 'Cache-Control, Content-Language, Content-Type, Expires, Last-Modified, Pragma, Accept-Language, Accept, Origin')
self.send_header('Content-Length', '0')
self.end_headers()
import socket
import threading
import time
+import traceback, sys
from processor import Session, Dispatcher
from utils import print_log
def __init__(self, connection, address, use_ssl, ssl_certfile, ssl_keyfile):
Session.__init__(self)
+ self.use_ssl = use_ssl
if use_ssl:
import ssl
self._connection = ssl.wrap_socket(
server_side=True,
certfile=ssl_certfile,
keyfile=ssl_keyfile,
- ssl_version=ssl.PROTOCOL_SSLv23)
+ ssl_version=ssl.PROTOCOL_SSLv23,
+ do_handshake_on_connect=False)
else:
self._connection = connection
self.address = address[0]
self.name = "TCP " if not use_ssl else "SSL "
+ self.response_queue = queue.Queue()
+
+ def do_handshake(self):
+ if self.use_ssl:
+ self._connection.do_handshake()
def connection(self):
if self.stopped():
return self._connection
def stop(self):
+ if self.stopped():
+ return
+
+ try:
+ self._connection.shutdown(socket.SHUT_RDWR)
+ except:
+ # print_log("problem shutting down", self.address)
+ # traceback.print_exc(file=sys.stdout)
+ pass
+
self._connection.close()
- #print "Terminating connection:", self.address
with self.lock:
self._stopped = True
def send_response(self, response):
- data = json.dumps(response) + "\n"
- # Possible race condition here by having session
- # close connection?
- # I assume Python connections are thread safe interfaces
- try:
- connection = self.connection()
- while data:
- l = connection.send(data)
- data = data[l:]
- except:
- self.stop()
+ self.response_queue.put(response)
+
+
+class TcpClientResponder(threading.Thread):
+
+ def __init__(self, session):
+ self.session = session
+ threading.Thread.__init__(self)
+
+ def run(self):
+ while not self.session.stopped():
+ response = self.session.response_queue.get()
+ data = json.dumps(response) + "\n"
+ try:
+ while data:
+ l = self.session.connection().send(data)
+ data = data[l:]
+ except:
+ self.session.stop()
+
class TcpClientRequestor(threading.Thread):
threading.Thread.__init__(self)
def run(self):
+ try:
+ self.session.do_handshake()
+ except:
+ return
+
while not self.shared.stopped():
if not self.update():
break
self.ssl_certfile = ssl_certfile
def run(self):
- if self.use_ssl:
- print_log("TCP/SSL server started.")
- else:
- print_log("TCP server started.")
+ print_log(("SSL" if self.use_ssl else "TCP") + " server started on port %d" % self.port)
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind((self.host, self.port))
- sock.listen(1)
+ sock.listen(5)
+
while not self.shared.stopped():
+
+ #if self.use_ssl: print_log("SSL: socket listening")
try:
- session = TcpSession(*sock.accept(), use_ssl=self.use_ssl, ssl_certfile=self.ssl_certfile, ssl_keyfile=self.ssl_keyfile)
+ connection, address = sock.accept()
+ except:
+ traceback.print_exc(file=sys.stdout)
+ time.sleep(0.1)
+ continue
+
+ #if self.use_ssl: print_log("SSL: new session", address)
+ try:
+ session = TcpSession(connection, address, use_ssl=self.use_ssl, ssl_certfile=self.ssl_certfile, ssl_keyfile=self.ssl_keyfile)
except BaseException, e:
error = str(e)
- print_log("cannot start TCP session", error)
+ print_log("cannot start TCP session", error, address)
+ connection.close()
+ time.sleep(0.1)
continue
+
self.dispatcher.add_session(session)
self.dispatcher.collect_garbage()
client_req = TcpClientRequestor(self.dispatcher, session)
client_req.start()
+ responder = TcpClientResponder(session)
+ responder.start()
def hex_to_int(s):
- return eval('0x' + s[::-1].encode('hex'))
+ return int('0x' + s[::-1].encode('hex'), 16)
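The patched hex_to_int drops eval() — which would execute attacker-influenced strings — in favour of int(..., 16) on the byte-reversed hex, i.e. it parses a little-endian byte string. The Python 3 equivalent uses int.from_bytes:

```python
def hex_to_int(s):
    # little-endian byte string -> integer, without eval()
    return int.from_bytes(s, 'little')

assert hex_to_int(b'\x01\x00\x00\x00') == 1
assert hex_to_int(b'\x00\x01') == 256
```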
def header_from_string(s):
############ functions from pywallet #####################
-addrtype = 0
def hash_160(public_key):
return hash_160_to_bc_address(hash_160(public_key))
-def hash_160_to_bc_address(h160):
+def hash_160_to_bc_address(h160, addrtype = 0):
if h160 == 'None':
return 'None'
vh160 = chr(addrtype) + h160
return key
-def PrivKeyToSecret(privkey):
- return privkey[9:9+32]
-
-
-def SecretToASecret(secret):
- vchIn = chr(addrtype+128) + secret
- return EncodeBase58Check(vchIn)
-
-
-def ASecretToSecret(key):
- vch = DecodeBase58Check(key)
- if vch and vch[0] == chr(addrtype+128):
- return vch[1:]
- else:
- return False
########### end pywallet functions #######################
-VERSION = "0.6"
+VERSION = "0.8"