Initial revision

master
Neil Booth 8 years ago
commit
a3dbc68614
  5  .gitignore
 19  ACKNOWLEDGEMENTS
  1  AUTHORS
267  HOWTO.rst
 24  LICENSE
126  README.rst
240  lib/coins.py
 45  lib/enum.py
115  lib/hash.py
306  lib/script.py
140  lib/tx.py
 66  lib/util.py
 15  samples/scripts/NOTES
  1  samples/scripts/env/COIN
  1  samples/scripts/env/DB_DIRECTORY
  1  samples/scripts/env/FLUSH_SIZE
  1  samples/scripts/env/NETWORK
  1  samples/scripts/env/RPC_HOST
  1  samples/scripts/env/RPC_PASSWORD
  1  samples/scripts/env/RPC_PORT
  1  samples/scripts/env/RPC_USERNAME
  1  samples/scripts/env/SERVER_MAIN
  1  samples/scripts/env/USERNAME
  2  samples/scripts/log/run
  3  samples/scripts/run
  0  server/__init__.py
470  server/db.py
 54  server/env.py
232  server/server.py
 48  server_main.py

5
.gitignore

@@ -0,0 +1,5 @@
*/__pycache__/
*/*~
*.#*
*#
*~

19
ACKNOWLEDGEMENTS

@@ -0,0 +1,19 @@
Thanks to Thomas Voegtlin for creating the Electrum software and
infrastructure and for maintaining it so diligently. Electrum is
probably the best desktop Bitcoin wallet solution for most users. My
faith in it is such that I use Electrum software to store most of my
Bitcoins.
Whilst the vast majority of the code here is my own original work and
includes some new ideas, it is very clear that the general structure
and concept are those of Electrum. Some parts of the code and ideas
of Electrum, some of which it itself took from other projects such as
Abe and pywallet, remain. Thanks to the authors of all the software
this is derived from.
Thanks to Daniel Bernstein for daemontools and other software, and to
Matthew Dillon for DragonFlyBSD. They are both deeply inspirational
people.
And of course, thanks to Satoshi for the wonderful creation that is
Bitcoin.

1
AUTHORS

@@ -0,0 +1 @@
Neil Booth: creator and maintainer

267
HOWTO.rst

@@ -0,0 +1,267 @@
Prerequisites
=============
ElectrumX should run on any flavour of unix. I have run it
successfully on Mac OS X and DragonFlyBSD. It won't run out-of-the-box
on Windows, but the changes required to make it do so should be
small - patches welcome.
+ Python3: ElectrumX makes heavy use of asyncio, so version >= 3.5 is required
+ plyvel: Python interface to LevelDB. I am using plyvel-0.9.
+ aiohttp: Python library for asynchronous HTTP. ElectrumX uses it for
  communication with the daemon. I am using aiohttp-0.21.
While not a requirement for running ElectrumX, it is intended to be run
with supervisor software such as Daniel Bernstein's daemontools, or
Gerald Pape's runit package. These make administration of secure
unix servers very easy, and I strongly recommend you install one of these
and familiarise yourself with them. The instructions below and sample
run scripts assume daemontools; adapting to runit should be trivial
for someone used to either.
When building the database from the genesis block, ElectrumX has to
flush large quantities of data to disk and to leveldb. You will have
a much nicer experience if the database directory is on an SSD rather
than an HDD. Currently, to around height 430,000 of the Bitcoin
blockchain, the final size of the leveldb database and other ElectrumX
file metadata comes to around 15GB. Leveldb needs a bit more for brief
periods, and the block chain is only getting longer, so I would
recommend having at least 30-40GB free space.
Running
=======
Install the prerequisites above.
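The two Python packages can be installed with pip; this is a sketch
assuming pip3 is present, matches your Python 3.5 interpreter, and
that the LevelDB library plyvel binds to is already installed::
pip3 install plyvel aiohttp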
Check out the code from Github::
git clone https://github.com/kyuupichan/electrumx.git
cd electrumx
I have not yet created a setup.py, so for now I suggest you run
the code from the source tree or a copy of it.
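Should you want a quick foreground test before setting up supervision,
something like the following should work once the env/ directory
described below has been populated (envdir ships with daemontools; the
paths here are illustrative)::
cd /path/to/repo/electrumx
envdir ~/scripts/electrumx/env python3 server_main.py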
You should create a standard user account to run the server under;
your own is probably adequate unless paranoid. The paranoid might
also want to create another user account for the daemontools logging
process. The sample scripts and these instructions assume it is all
under one account which I have called 'electrumx'.
Next create a directory where the database will be stored and make it
writeable by the electrumx account. I recommend this directory live
on an SSD::
mkdir /path/to/db_directory
chown electrumx /path/to/db_directory
Next create a daemontools service directory; this only holds symlinks
(see daemontools documentation). The 'svscan' program will ensure the
servers in the directory are running by launching a 'supervise'
supervisor for the server and another for its logging process. You
can run 'svscan' under the electrumx account if that is the only one
involved (server and logger); otherwise it will need to run as root so
that the user can be switched to electrumx.
Assuming this directory is called service, you would do one of::
mkdir /service # If running svscan as root
mkdir ~/service # As electrumx if running svscan as that a/c
Next create a directory to hold the scripts that the 'supervise'
process spawned by 'svscan' will run - this directory must be readable
by the 'svscan' process. Suppose this directory is called scripts, you
might do::
mkdir -p ~/scripts/electrumx
Then copy all the sample scripts from the ElectrumX source tree there::
cp -R /path/to/repo/electrumx/samples/scripts ~/scripts/electrumx
This copies 4 things: the top level server run script, a log/ directory
with the logger run script, an env/ directory, and a NOTES file.
You need to configure the environment variables under env/ to your
setup, as explained in NOTES. ElectrumX server currently takes no
command line arguments; all of its configuration is taken from its
environment, which is set up according to the env/ directory (see the
'envdir' man page). Finally you need to change the log/run script to use the
directory where you want the logs to be written by multilog. The
directory need not exist as multilog will create it, but its parent
directory must exist.
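For example, populating a few of the env/ files might look like this;
the values are illustrative::
echo Bitcoin > ~/scripts/electrumx/env/COIN
echo mainnet > ~/scripts/electrumx/env/NETWORK
echo /path/to/db_directory > ~/scripts/electrumx/env/DB_DIRECTORY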
Now start the 'svscan' process. This will not do much as the service
directory is still empty::
svscan ~/service & disown
svscan is now waiting for services to be added to the directory::
cd ~/service
ln -s ~/scripts/electrumx electrumx
Creating the symlink will kick off the server process almost immediately.
You can see its logs with::
tail -F /path/to/log/dir/current | tai64nlocal
Progress
========
The speed of indexing the blockchain depends on your hardware, of course.
As Python is single-threaded, most of the time only 1 core is kept busy.
ElectrumX uses Python's asyncio to prefill a cache of future blocks
asynchronously; this keeps the CPU busy processing the chain and not
waiting for blocks to be delivered. I therefore doubt there will be
much boost in performance if the daemon is on the same host: indeed it
may even be beneficial to have the daemon on a separate machine so the
machine doing the indexing is focussing on the one task and not the
wider network.
The FLUSH_SIZE environment variable is an upper bound on how much
unflushed data is cached before writing to disk + leveldb. The
default is 4 million items, which is probably fine unless your
hardware is quite poor. If you've got a really fat machine with lots
of RAM, 10 million or even higher is likely good (I used 10 million on
Machine B below without issue so far). A higher number will mean
fewer flushes and save your disk from thrashing, but you don't want it
so high that your machine is swapping. If your machine loses power all
synchronization since the previous flush is lost.
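FLUSH_SIZE is just another env/ file, so raising it on capable
hardware is a one-liner (the value is illustrative)::
echo 10000000 > ~/scripts/electrumx/env/FLUSH_SIZE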
When syncing, ElectrumX is CPU bound over 70% of the time, with the
rest being bursts of disk activity whilst flushing. Here is my
experience with the current codebase, to given heights and rough
wall-time::
Height     Machine A    Machine B    DB + Metadata
100,000                 2m 30s       0 (unflushed)
150,000    35m          4m 30s       0.2 GB
180,000    1h 5m        9m           0.4 GB
245,800    3h
290,000                 13h 15m      3.3 GB
Machine A: a low-spec 2011 1.6GHz AMD E-350 dual-core fanless CPU, 8GB
RAM and a DragonFlyBSD HAMMER filesystem on an SSD. It requests blocks
over the LAN from a bitcoind on machine B. FLUSH_SIZE: I changed it
several times between 1 and 5 million during the sync which causes the
above stats to be a little approximate. Initial FLUSH_SIZE was 1
million and first flush at height 126,538.
Machine B: a late 2012 iMac running El Capitan 10.11.6, 2.9GHz
quad-core Intel i5 CPU with an HDD and 24GB RAM. Running bitcoind on
the same machine. FLUSH_SIZE of 10 million. First flush at height
195,146.
Transactions processed per second seems to gradually decrease over
time but this statistic is not currently logged and I've not looked
closely.
For chains other than bitcoin-mainnet synchronization should be much
faster.
Terminating ElectrumX
=====================
The preferred way to terminate the server process is to send it the
TERM signal. For a daemontools supervised process this is best done
by bringing it down like so::
svc -d ~/service/electrumx
If processing the blockchain, the server will start the process of
flushing to disk. Once that is complete the server will exit. Be
patient as disk flushing can take a while.
ElectrumX flushes to leveldb using its transaction functionality. The
plyvel documentation claims this is atomic. I have written ElectrumX
with the intent that, to the extent this atomicity guarantee holds,
the database should not get corrupted even if the ElectrumX process is
forcibly killed or there is loss of power. The worst case is losing
unflushed in-memory blockchain processing and having to restart from
the state as of the prior successfully completed flush.
During development I have terminated ElectrumX processes in various
ways and at random times, and not once have I had any corruption as a
result of doing so. My only DB corruption has been through buggy
code. If you do have any database corruption as a result of
terminating the process without modifying the code I would be very
interested in hearing details.
I have heard about corruption issues with electrum-server. I cannot
be sure but with a brief look at the code it does seem that if
interrupted at the wrong time the databases it uses could become
inconsistent.
Once the process has terminated, you can start it up again with::
svc -u ~/service/electrumx
You can see the status of a running service with::
svstat ~/service/electrumx
Of course, svscan can handle multiple services simultaneously from the
same service directory, such as a testnet or altcoin server. See the
man pages of these various commands for more information.
Understanding the Logs
======================
You can see the logs usefully like so::
tail -F /path/to/log/dir/current | tai64nlocal
Here is typical log output on startup::
2016-10-08 14:46:48.088516500 Launching ElectrumX server...
2016-10-08 14:46:49.145281500 INFO:root:ElectrumX server starting
2016-10-08 14:46:49.147215500 INFO:root:switching current directory to /var/nohist/server-test
2016-10-08 14:46:49.150765500 INFO:DB:using flush size of 1,000,000 entries
2016-10-08 14:46:49.156489500 INFO:DB:created new database Bitcoin-mainnet
2016-10-08 14:46:49.157531500 INFO:DB:flushing to levelDB 0 txs and 0 blocks to height -1 tx count: 0
2016-10-08 14:46:49.158640500 INFO:DB:flushed. Cache hits: 0/0 writes: 5 deletes: 0 elided: 0 sync: 0d 00h 00m 00s
2016-10-08 14:46:49.159508500 INFO:RPC:using RPC URL http://user:pass@192.168.0.2:8332/
2016-10-08 14:46:49.167352500 INFO:BlockCache:catching up, block cache limit 10MB...
2016-10-08 14:46:49.318374500 INFO:BlockCache:prefilled 10 blocks to height 10 daemon height: 433,401 block cache size: 2,150
2016-10-08 14:46:50.193962500 INFO:BlockCache:prefilled 4,000 blocks to height 4,010 daemon height: 433,401 block cache size: 900,043
2016-10-08 14:46:51.253644500 INFO:BlockCache:prefilled 4,000 blocks to height 8,010 daemon height: 433,401 block cache size: 1,600,613
2016-10-08 14:46:52.195633500 INFO:BlockCache:prefilled 4,000 blocks to height 12,010 daemon height: 433,401 block cache size: 2,329,325
Under normal operation these prefill messages repeat fairly regularly.
Occasionally you will get a flush to disk; depending on how high your
FLUSH_SIZE environment variable is set, and on your hardware, this
could be anything from every 5 minutes to every hour. A flush begins with::
2016-10-08 06:34:20.841563500 INFO:DB:flushing to levelDB 828,190 txs and 3,067 blocks to height 243,982 tx count: 20,119,669
During the flush, which can take many minutes, you may see logs like
this::
2016-10-08 12:20:08.558750500 INFO:DB:address 1dice7W2AicHosf5EL3GFDUVga7TgtPFn hist moving to idx 3000
These are just informational messages about addresses with very large
histories, generated as those histories are being written out. After
the flush has completed a few stats are printed about cache hits, the
number of writes and deletes, and the number of writes that were
elided by the cache::
2016-10-08 06:37:41.035139500 INFO:DB:flushed. Cache hits: 3,185,958/192,336 writes: 781,526 deletes: 465,236 elided: 3,185,958 sync: 0d 06h 57m 03s
After flush-to-disk you may see an aiohttp error; this is the daemon
timing out the connection while the disk flush was in progress. This
is harmless; I intend to fix this soon by yielding whilst flushing.
You may see one or two logs about ambiguous UTXOs or hash160s::
2016-10-08 07:24:34.068609500 INFO:DB:UTXO compressed key collision at height 252943 utxo 115cc1408e5321636675a8fcecd204661a6f27b4b7482b1b7c4402ca4b94b72f / 1
These are informational messages about an artefact of the compression
scheme ElectrumX uses and are harmless. However, if you see more than
a handful of these, particularly close together, something is very
wrong and your DB is probably corrupt.

24
LICENSE

@@ -0,0 +1,24 @@
Copyright (c) 2016, Neil Booth
All rights reserved.
The MIT License (MIT)
Permission is hereby granted, free of charge, to any person obtaining
a copy of this software and associated documentation files (the
"Software"), to deal in the Software without restriction, including
without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to
the following conditions:
The above copyright notice and this permission notice shall be
included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

126
README.rst

@@ -0,0 +1,126 @@
ElectrumX - Reimplementation of Electrum-server
===============================================
::
Licence: MIT Licence
Author: Neil Booth
Language: Python (>=3.5)
Motivation
==========
For privacy and other reasons, I have long wanted to run my own
Electrum server, but for reasons I cannot remember I struggled to set
it up or get it to work on my DragonFlyBSD system, and I lost interest
for over a year.
More recently I heard that Electrum server databases were around 35GB
in size when gzipped, and had sync times from Genesis of over a week
(and sufficiently painful that no one seems to have done one for a
long time) and got curious about improvements. After taking a look at
the existing server code I decided to try a different approach.
I prefer Python3 over Python2, and the fact that Electrum is stuck on
Python2 has been frustrating for a while. It's easier to change the
server to Python3 than the client.
It also seemed like a good way to learn about asyncio, which is a
wonderful and powerful feature of Python from 3.4 onwards.
Incidentally asyncio would also make a much better way to implement
the Electrum client.
Finally, though no fan of most altcoins, I wanted to write a codebase
that could easily be reused for those alts that are reasonably
compatible with Bitcoin. Such an abstraction is also useful for
testnets, of course.
Implementation
==============
ElectrumX does not currently do any pruning. With luck it may never
become necessary. So how does it achieve a much more compact database
than Electrum server, which throws away a lot of information? And
sync faster to boot?
All of the following likely play a part:
- more compact representation of UTXOs, the address index, and
history. Electrum server stores the full transaction hash and height
for all UTXOs. In its pruned history it does the same. ElectrumX
just stores the transaction number in the linear history of
transactions, and it looks like that will fit in a 4-byte integer for
at least 5 years. ElectrumX calculates the height from a simple
lookup in a linear array which is stored on disk. ElectrumX also
stores transaction hashes in a linear array on disk (a sketch of the
numbering scheme follows this list).
- storing static append-only metadata which is indexed by position on
disk rather than in levelDB. It would be nice to do this for histories
but I cannot think how they could easily be indexed on a filesystem.
- avoiding unnecessary or redundant computations
- more efficient memory usage - through more compact data structures
and judicious use of memoryviews
- big caches (controlled via FLUSH_SIZE)
- asyncio and asynchronous prefetch of blocks. With luck ElectrumX
will have no need of threads or locking primitives
- because it prunes, electrum-server needs to store undo information;
ElectrumX does not need to store undo information for blockchain
reorganisations (note blockchain reorgs are not yet implemented in
ElectrumX)
- finally electrum-server maintains a patricia tree of UTXOs. My
understanding is this is for future features and not currently
required. It's unclear precisely how this will be used or what
could replace or duplicate its functionality in ElectrumX. Since
ElectrumX stores all necessary blockchain metadata some solution
should exist.
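To illustrate the first point above, here is a minimal sketch - not
the actual ElectrumX code, with invented names and data - of how a
linear transaction number plus a cumulative per-block transaction
count array recover a height and in-block position::
import struct
from bisect import bisect_right

# tx_counts[h] = cumulative number of txs up to and including height h
tx_counts = [1, 3, 7, 12]  # an illustrative 4-block chain

def locate_tx(tx_num, tx_counts):
    '''Map a linear tx number to (height, position within its block).'''
    height = bisect_right(tx_counts, tx_num)
    first_in_block = tx_counts[height - 1] if height else 0
    return height, tx_num - first_in_block

# A tx number packs into 4 bytes, giving ~4.3 billion of headroom.
packed = struct.pack('<I', 11)

assert locate_tx(0, tx_counts) == (0, 0)   # first tx is in block 0
assert locate_tx(11, tx_counts) == (3, 4)  # 12th tx is 5th in block 3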
Future/TODO
===========
- handling blockchain reorgs
- handling client connections (heh!)
- investigating leveldb space / speed tradeoffs
- seeking out further efficiencies. ElectrumX is CPU bound; it would not
surprise me if there is a way to cut CPU load by 10-20% more. To squeeze
more out would probably require some things to move to C or C++.
Once I get round to writing the server part, I will add DoS
protections if necessary to defend against requests for large
histories. However with asyncio it would not surprise me if ElectrumX
could smoothly serve the whole history of the biggest Satoshi dice
address with minimal negative impact on other connections; we shall
have to see. If the requestor is running Electrum client I am
confident that it would collapse under the load far more quickly than
the server would; it is very inefficient at handling large wallets
and histories.
Database Format
===============
The database and metadata formats of ElectrumX are very likely to
change in the future. If so, old DBs would not be usable. However it
should be easy to write a short Python script to do any necessary
conversions in-place without having to start afresh.
Miscellany
==========
As I've been researching where the time is going during block chain
indexing and how various cache sizes and hardware choices affect it,
I'd appreciate it if anyone trying to synchronize could tell me::
- their O/S and filesystem
- their hardware (CPU name and speed, RAM, and disk kind)
- whether their daemon was on the same host or not
- whatever stats about sync height vs time they can provide (the
logs give it all in wall time)
- the network they synced
Neil Booth
kyuupichan@gmail.com
https://github.com/kyuupichan
1BWwXJH3q6PRsizBkSGm2Uw4Sz1urZ5sCj

240
lib/coins.py

@@ -0,0 +1,240 @@
# See the file "COPYING" for information about the copyright
# and warranty status of this software.
import inspect
import sys
from lib.hash import Base58, hash160
from lib.script import ScriptPubKey
from lib.tx import Deserializer
class CoinError(Exception):
pass
class Coin(object):
'''Base class of coin hierarchy'''
# Not sure if these are coin-specific
HEADER_LEN = 80
DEFAULT_RPC_PORT = 8332
@staticmethod
def coins():
is_coin = lambda obj: (inspect.isclass(obj)
and issubclass(obj, Coin)
and obj != Coin)
pairs = inspect.getmembers(sys.modules[__name__], is_coin)
# Returned in the order they appear in this file
return [pair[1] for pair in pairs]
@classmethod
def lookup_coin_class(cls, name, net):
for coin in cls.coins():
if (coin.NAME.lower() == name.lower()
and coin.NET.lower() == net.lower()):
return coin
raise CoinError('unknown coin {} and network {} combination'
.format(name, net))
@staticmethod
def lookup_xverbytes(verbytes):
# Order means BTC testnet will override NMC testnet
for coin in Coin.coins():
if verbytes == coin.XPUB_VERBYTES:
return True, coin
if verbytes == coin.XPRV_VERBYTES:
return False, coin
raise CoinError("version bytes unrecognised")
@classmethod
def address_to_hash160(cls, addr):
'''Returns a hash160 given an address'''
result = Base58.decode_check(addr)
if len(result) != 21:
raise CoinError('invalid address: {}'.format(addr))
return result[1:]
@classmethod
def P2PKH_address_from_hash160(cls, hash_bytes):
'''Returns a P2PKH address given a hash160'''
assert len(hash_bytes) == 20
payload = bytes([cls.P2PKH_VERBYTE]) + hash_bytes
return Base58.encode_check(payload)
@classmethod
def P2PKH_address_from_pubkey(cls, pubkey):
'''Returns a coin address given a public key'''
return cls.P2PKH_address_from_hash160(hash160(pubkey))
@classmethod
def P2SH_address_from_hash160(cls, hash_bytes):
'''Returns a P2SH address given a hash160'''
assert len(hash_bytes) == 20
payload = bytes([cls.P2SH_VERBYTE]) + hash_bytes
return Base58.encode_check(payload)
@classmethod
def multisig_address(cls, m, pubkeys):
'''Returns the P2SH address for an M of N multisig transaction. Pass
the N pubkeys of which M are needed to sign it. If generating
an address for a wallet, it is the caller's responsibility to
sort them to ensure order does not matter for, e.g., wallet
recovery.'''
script = cls.pay_to_multisig_script(m, pubkeys)
payload = bytes([cls.P2SH_VERBYTE]) + hash160(script)
return Base58.encode_check(payload)
@classmethod
def pay_to_multisig_script(cls, m, pubkeys):
'''Returns a P2SH multisig script for an M of N multisig
transaction.'''
return ScriptPubKey.multisig_script(m, pubkeys)
@classmethod
def pay_to_pubkey_script(cls, pubkey):
'''Returns a pubkey script that pays to pubkey. The input is the
raw pubkey bytes (length 33 or 65).'''
return ScriptPubKey.P2PK_script(pubkey)
@classmethod
def pay_to_address_script(cls, address):
'''Returns a pubkey script that pays to pubkey hash. Input is the
address (either P2PKH or P2SH) in base58 form.'''
raw = Base58.decode_check(address)
# Require version byte plus hash160.
verbyte = -1
if len(raw) == 21:
verbyte, hash_bytes = raw[0], raw[1:]
if verbyte == cls.P2PKH_VERBYTE:
return ScriptPubKey.P2PKH_script(hash_bytes)
if verbyte == cls.P2SH_VERBYTE:
return ScriptPubKey.P2SH_script(hash_bytes)
raise CoinError('invalid address: {}'.format(address))
@classmethod
def prvkey_WIF(cls, privkey_bytes, compressed):
'''The private key encoded in Wallet Import Format.'''
payload = bytearray([cls.WIF_BYTE]) + privkey_bytes
if compressed:
payload.append(0x01)
return Base58.encode_check(payload)
@classmethod
def read_block(cls, block):
assert isinstance(block, memoryview)
d = Deserializer(block[cls.HEADER_LEN:])
return d.read_block()
class Bitcoin(Coin):
NAME = "Bitcoin"
SHORTNAME = "BTC"
NET = "mainnet"
XPUB_VERBYTES = bytes.fromhex("0488b21e")
XPRV_VERBYTES = bytes.fromhex("0488ade4")
P2PKH_VERBYTE = 0x00
P2SH_VERBYTE = 0x05
WIF_BYTE = 0x80
GENESIS_HASH=(b'000000000019d6689c085ae165831e93'
b'4ff763ae46a2a6c172b3f1b60a8ce26f')
class BitcoinTestnet(Coin):
NAME = "Bitcoin"
SHORTNAME = "XTN"
NET = "testnet"
XPUB_VERBYTES = bytes.fromhex("043587cf")
XPRV_VERBYTES = bytes.fromhex("04358394")
P2PKH_VERBYTE = 0x6f
P2SH_VERBYTE = 0xc4
WIF_BYTE = 0xef
# Source: pycoin and others
class Litecoin(Coin):
NAME = "Litecoin"
SHORTNAME = "LTC"
NET = "mainnet"
XPUB_VERBYTES = bytes.fromhex("019da462")
XPRV_VERBYTES = bytes.fromhex("019d9cfe")
P2PKH_VERBYTE = 0x30
P2SH_VERBYTE = 0x05
WIF_BYTE = 0xb0
class LitecoinTestnet(Coin):
NAME = "Litecoin"
SHORTNAME = "XLT"
NET = "testnet"
XPUB_VERBYTES = bytes.fromhex("0436f6e1")
XPRV_VERBYTES = bytes.fromhex("0436ef7d")
P2PKH_VERBYTE = 0x6f
P2SH_VERBYTE = 0xc4
WIF_BYTE = 0xef
# Source: namecoin.org
class Namecoin(Coin):
NAME = "Namecoin"
SHORTNAME = "NMC"
NET = "mainnet"
XPUB_VERBYTES = bytes.fromhex("d7dd6370")
XPRV_VERBYTES = bytes.fromhex("d7dc6e31")
P2PKH_VERBYTE = 0x34
P2SH_VERBYTE = 0x0d
WIF_BYTE = 0xe4
class NamecoinTestnet(Coin):
NAME = "Namecoin"
SHORTNAME = "XNM"
NET = "testnet"
XPUB_VERBYTES = bytes.fromhex("043587cf")
XPRV_VERBYTES = bytes.fromhex("04358394")
P2PKH_VERBYTE = 0x6f
P2SH_VERBYTE = 0xc4
WIF_BYTE = 0xef
# For DOGE there is disagreement across sites like bip32.org and
# pycoin. Taken from bip32.org and bitmerchant on github
class Dogecoin(Coin):
NAME = "Dogecoin"
SHORTNAME = "DOGE"
NET = "mainnet"
XPUB_VERBYTES = bytes.fromhex("02facafd")
XPRV_VERBYTES = bytes.fromhex("02fac398")
P2PKH_VERBYTE = 0x1e
P2SH_VERBYTE = 0x16
WIF_BYTE = 0x9e
class DogecoinTestnet(Coin):
NAME = "Dogecoin"
SHORTNAME = "XDT"
NET = "testnet"
XPUB_VERBYTES = bytes.fromhex("0432a9a8")
XPRV_VERBYTES = bytes.fromhex("0432a243")
P2PKH_VERBYTE = 0x71
P2SH_VERBYTE = 0xc4
WIF_BYTE = 0xf1
# Source: pycoin
class Dash(Coin):
NAME = "Dash"
SHORTNAME = "DASH"
NET = "mainnet"
XPUB_VERBYTES = bytes.fromhex("02fe52cc")
XPRV_VERBYTES = bytes.fromhex("02fe52f8")
P2PKH_VERBYTE = 0x4c
P2SH_VERBYTE = 0x10
WIF_BYTE = 0xcc
class DashTestnet(Coin):
NAME = "Dogecoin"
SHORTNAME = "tDASH"
NET = "testnet"
XPUB_VERBYTES = bytes.fromhex("3a805837")
XPRV_VERBYTES = bytes.fromhex("3a8061a0")
P2PKH_VERBYTE = 0x8b
P2SH_VERBYTE = 0x13
WIF_BYTE = 0xef

45
lib/enum.py

@@ -0,0 +1,45 @@
# enum-like type
# From the Python Cookbook: http://code.activestate.com/recipes/67107/
class EnumException(Exception):
pass
class Enumeration:
def __init__(self, name, enumList):
self.__doc__ = name
lookup = {}
reverseLookup = {}
i = 0
uniqueNames = set()
uniqueValues = set()
for x in enumList:
if isinstance(x, tuple):
x, i = x
if not isinstance(x, str):
raise EnumException("enum name {} not a string".format(x))
if not isinstance(i, int):
raise EnumException("enum value {} not an integer".format(i))
if x in uniqueNames:
raise EnumException("enum name {} not unique".format(x))
if i in uniqueValues:
raise EnumException("enum value {} not unique".format(x))
uniqueNames.add(x)
uniqueValues.add(i)
lookup[x] = i
reverseLookup[i] = x
i = i + 1
self.lookup = lookup
self.reverseLookup = reverseLookup
def __getattr__(self, attr):
result = self.lookup.get(attr)
if result is None:
raise AttributeError('enumeration has no member {}'.format(attr))
return result
def whatis(self, value):
return self.reverseLookup[value]

115
lib/hash.py

@@ -0,0 +1,115 @@
# See the file "COPYING" for information about the copyright
# and warranty status of this software.
import hashlib
import hmac
from lib.util import bytes_to_int, int_to_bytes
def sha256(x):
assert isinstance(x, (bytes, bytearray, memoryview))
return hashlib.sha256(x).digest()
def ripemd160(x):
assert isinstance(x, (bytes, bytearray, memoryview))
h = hashlib.new('ripemd160')
h.update(x)
return h.digest()
def double_sha256(x):
return sha256(sha256(x))
def hmac_sha512(key, msg):
return hmac.new(key, msg, hashlib.sha512).digest()
def hash160(x):
return ripemd160(sha256(x))
class InvalidBase58String(Exception):
pass
class InvalidBase58CheckSum(Exception):
pass
class Base58(object):
chars = '123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz'
assert len(chars) == 58
cmap = {c: n for n, c in enumerate(chars)}
@staticmethod
def char_value(c):
val = Base58.cmap.get(c)
if val is None:
raise InvalidBase58String
return val
@staticmethod
def decode(txt):
"""Decodes txt into a big-endian bytearray."""
if not isinstance(txt, str):
raise InvalidBase58String("a string is required")
if not txt:
raise InvalidBase58String("string cannot be empty")
value = 0
for c in txt:
value = value * 58 + Base58.char_value(c)
result = int_to_bytes(value)
# Prepend leading zero bytes if necessary
count = 0
for c in txt:
if c != '1':
break
count += 1
if count:
result = bytes(count) + result
return result
@staticmethod
def encode(be_bytes):
"""Converts a big-endian bytearray into a base58 string."""
value = bytes_to_int(be_bytes)
txt = ''
while value:
value, mod = divmod(value, 58)
txt += Base58.chars[mod]
for byte in be_bytes:
if byte != 0:
break
txt += '1'
return txt[::-1]
@staticmethod
def decode_check(txt):
'''Decodes a Base58Check-encoded string to a payload. The version
byte(s) prefix the payload.'''
be_bytes = Base58.decode(txt)
result, check = be_bytes[:-4], be_bytes[-4:]
if check != double_sha256(result)[:4]:
raise InvalidBase58CheckSum
return result
@staticmethod
def encode_check(payload):
"""Encodes a payload bytearray (which includes the version byte(s))
into a Base58Check string."""
assert isinstance(payload, (bytes, bytearray))
be_bytes = payload + double_sha256(payload)[:4]
return Base58.encode(be_bytes)

306
lib/script.py

@@ -0,0 +1,306 @@
# See the file "COPYING" for information about the copyright
# and warranty status of this software.
from binascii import hexlify
import struct
from lib.enum import Enumeration
from lib.hash import hash160
from lib.util import cachedproperty
class ScriptError(Exception):
pass
OpCodes = Enumeration("Opcodes", [
("OP_0", 0), ("OP_PUSHDATA1", 76),
"OP_PUSHDATA2", "OP_PUSHDATA4", "OP_1NEGATE",
"OP_RESERVED",
"OP_1", "OP_2", "OP_3", "OP_4", "OP_5", "OP_6", "OP_7", "OP_8",
"OP_9", "OP_10", "OP_11", "OP_12", "OP_13", "OP_14", "OP_15", "OP_16",
"OP_NOP", "OP_VER", "OP_IF", "OP_NOTIF", "OP_VERIF", "OP_VERNOTIF",
"OP_ELSE", "OP_ENDIF", "OP_VERIFY", "OP_RETURN",
"OP_TOALTSTACK", "OP_FROMALTSTACK", "OP_2DROP", "OP_2DUP", "OP_3DUP",
"OP_2OVER", "OP_2ROT", "OP_2SWAP", "OP_IFDUP", "OP_DEPTH", "OP_DROP",
"OP_DUP", "OP_NIP", "OP_OVER", "OP_PICK", "OP_ROLL", "OP_ROT",
"OP_SWAP", "OP_TUCK",
"OP_CAT", "OP_SUBSTR", "OP_LEFT", "OP_RIGHT", "OP_SIZE",
"OP_INVERT", "OP_AND", "OP_OR", "OP_XOR", "OP_EQUAL", "OP_EQUALVERIFY",
"OP_RESERVED1", "OP_RESERVED2",
"OP_1ADD", "OP_1SUB", "OP_2MUL", "OP_2DIV", "OP_NEGATE", "OP_ABS",
"OP_NOT", "OP_0NOTEQUAL", "OP_ADD", "OP_SUB", "OP_MUL", "OP_DIV", "OP_MOD",
"OP_LSHIFT", "OP_RSHIFT", "OP_BOOLAND", "OP_BOOLOR", "OP_NUMEQUAL",
"OP_NUMEQUALVERIFY", "OP_NUMNOTEQUAL", "OP_LESSTHAN", "OP_GREATERTHAN",
"OP_LESSTHANOREQUAL", "OP_GREATERTHANOREQUAL", "OP_MIN", "OP_MAX",
"OP_WITHIN",
"OP_RIPEMD160", "OP_SHA1", "OP_SHA256", "OP_HASH160", "OP_HASH256",
"OP_CODESEPARATOR", "OP_CHECKSIG", "OP_CHECKSIGVERIFY", "OP_CHECKMULTISIG",
"OP_CHECKMULTISIGVERIFY",
"OP_NOP1",
"OP_CHECKLOCKTIMEVERIFY", "OP_CHECKSEQUENCEVERIFY"
])
# Paranoia to make it hard to create bad scripts
assert OpCodes.OP_DUP == 0x76
assert OpCodes.OP_HASH160 == 0xa9
assert OpCodes.OP_EQUAL == 0x87
assert OpCodes.OP_EQUALVERIFY == 0x88
assert OpCodes.OP_CHECKSIG == 0xac
assert OpCodes.OP_CHECKMULTISIG == 0xae
class ScriptSig(object):
'''A script from a tx input, typically provides one or more signatures.'''
SIG_ADDRESS, SIG_MULTI, SIG_PUBKEY, SIG_UNKNOWN = range(4)
def __init__(self, script, coin, kind, sigs, pubkeys):
self.script = script
self.coin = coin
self.kind = kind
self.sigs = sigs
self.pubkeys = pubkeys
@cachedproperty
def address(self):
if self.kind == ScriptSig.SIG_ADDRESS:
return self.coin.P2PKH_address_from_pubkey(self.pubkeys[0])
if self.kind == ScriptSig.SIG_MULTI:
return self.coin.multisig_address(len(self.sigs), self.pubkeys)
return 'Unknown'
@classmethod
def from_script(cls, script, coin):
'''Returns an instance of this class. Unrecognised scripts return
an object of kind SIG_UNKNOWN.'''
try:
return cls.parse_script(script, coin)
except ScriptError:
return cls(script, coin, cls.SIG_UNKNOWN, [], [])
@classmethod
def parse_script(cls, script, coin):
'''Returns an instance of this class. Raises on unrecognised
scripts.'''
ops, datas = Script.get_ops(script)
# Address, PubKey and P2SH redeems only push data
if not ops or not Script.match_ops(ops, [-1] * len(ops)):
raise ScriptError('unknown scriptsig pattern')
# Assume double data pushes are address redeems, single data
# pushes are pubkey redeems
if len(ops) == 2: # Signature, pubkey
return cls(script, coin, cls.SIG_ADDRESS, [datas[0]], [datas[1]])
if len(ops) == 1: # Pubkey
return cls(script, coin, cls.SIG_PUBKEY, [datas[0]], [])
# Presumably it is P2SH (though conceivably the above could be
# too; cannot be sure without the send-to script). We only
# handle CHECKMULTISIG P2SH, which because of a bitcoin core
bug always starts with an unused OP_0.
if ops[0] != OpCodes.OP_0:
raise ScriptError('unknown scriptsig pattern; expected OP_0')
# OP_0, Sig1, ..., SigM, pk_script
m = len(ops) - 2
pk_script = datas[-1]
pk_ops, pk_datas = Script.get_ops(pk_script)
# OP_2 pubkey1 pubkey2 pubkey3 OP_3 OP_CHECKMULTISIG
n = len(pk_ops) - 3
pattern = ([OpCodes.OP_1 + m - 1] + [-1] * n
+ [OpCodes.OP_1 + n - 1, OpCodes.OP_CHECKMULTISIG])
if m <= n and Script.match_ops(pk_ops, pattern):
return cls(script, coin, cls.SIG_MULTI, datas[1:-1], pk_datas[1:-2])
raise ScriptError('unknown multisig P2SH pattern')
class ScriptPubKey(object):
'''A script from a tx output that gives conditions necessary for
spending.'''
TO_ADDRESS, TO_P2SH, TO_PUBKEY, TO_UNKNOWN = range(4)
TO_ADDRESS_OPS = [OpCodes.OP_DUP, OpCodes.OP_HASH160, -1,
OpCodes.OP_EQUALVERIFY, OpCodes.OP_CHECKSIG]
TO_P2SH_OPS = [OpCodes.OP_HASH160, -1, OpCodes.OP_EQUAL]
TO_PUBKEY_OPS = [-1, OpCodes.OP_CHECKSIG]
def __init__(self, script, coin, kind, hash160, pubkey=None):
self.script = script
self.coin = coin
self.kind = kind
self.hash160 = hash160
if pubkey:
self.pubkey = pubkey
@cachedproperty
def address(self):
if self.kind == ScriptPubKey.TO_P2SH:
return self.coin.P2SH_address_from_hash160(self.hash160)
if self.hash160:
return self.coin.P2PKH_address_from_hash160(self.hash160)
return ''
@classmethod
def from_script(cls, script, coin):
'''Returns an instance of this class. Unrecognised scripts return
an object of kind TO_UNKNOWN.'''
try:
return cls.parse_script(script, coin)
except ScriptError:
return cls(script, coin, cls.TO_UNKNOWN, None)
@classmethod
def parse_script(cls, script, coin):
'''Returns an instance of this class. Raises on unrecognised
scripts.'''
ops, datas = Script.get_ops(script)
if Script.match_ops(ops, cls.TO_ADDRESS_OPS):
return cls(script, coin, cls.TO_ADDRESS, datas[2])
if Script.match_ops(ops, cls.TO_P2SH_OPS):
return cls(script, coin, cls.TO_P2SH, datas[1])
if Script.match_ops(ops, cls.TO_PUBKEY_OPS):
pubkey = datas[0]
return cls(script, coin, cls.TO_PUBKEY, hash160(pubkey), pubkey)
raise ScriptError('unknown script pubkey pattern')
@classmethod
def P2SH_script(cls, hash160):
return (bytes([OpCodes.OP_HASH160])
+ Script.push_data(hash160)
+ bytes([OpCodes.OP_EQUAL]))
@classmethod
def P2PKH_script(cls, hash160):
return (bytes([OpCodes.OP_DUP, OpCodes.OP_HASH160])
+ Script.push_data(hash160)
+ bytes([OpCodes.OP_EQUALVERIFY, OpCodes.OP_CHECKSIG]))
@classmethod
def validate_pubkey(cls, pubkey, req_compressed=False):
if isinstance(pubkey, (bytes, bytearray)):
if len(pubkey) == 33 and pubkey[0] in (2, 3):
return # Compressed
if len(pubkey) == 65 and pubkey[0] == 4:
if not req_compressed:
return
raise ScriptError('uncompressed pubkeys are invalid')
raise ScriptError('invalid pubkey {}'.format(pubkey))
@classmethod
def pubkey_script(cls, pubkey):
cls.validate_pubkey(pubkey)
return Script.push_data(pubkey) + bytes([OpCodes.OP_CHECKSIG])
@classmethod
def multisig_script(cls, m, pubkeys):
'''Returns the script for a pay-to-multisig transaction.'''
n = len(pubkeys)
if not 1 <= m <= n <= 15:
raise ScriptError('{:d} of {:d} multisig script not possible'
.format(m, n))
for pubkey in pubkeys:
cls.validate_pubkey(pubkey, req_compressed=True)
# See https://bitcoin.org/en/developer-guide
# 2 of 3 is: OP_2 pubkey1 pubkey2 pubkey3 OP_3 OP_CHECKMULTISIG
return (bytes([OpCodes.OP_1 + m - 1])
+ b''.join(Script.push_data(pubkey) for pubkey in pubkeys)
+ bytes([OpCodes.OP_1 + n - 1, OpCodes.OP_CHECKMULTISIG]))
class Script(object):
@classmethod
def get_ops(cls, script):
opcodes, datas = [], []
# The unpacks or script[n] below throw on truncated scripts
try:
n = 0
while n < len(script):
opcode, data = script[n], None
n += 1
if opcode <= OpCodes.OP_PUSHDATA4:
# Raw bytes follow
if opcode < OpCodes.OP_PUSHDATA1:
dlen = opcode
elif opcode == OpCodes.OP_PUSHDATA1:
dlen = script[n]
n += 1
elif opcode == OpCodes.OP_PUSHDATA2:
(dlen,) = struct.unpack('<H', script[n: n + 2])
n += 2
else:
(dlen,) = struct.unpack('<I', script[n: n + 4])
n += 4
data = script[n:n + dlen]
if len(data) != dlen:
raise ScriptError('truncated script')
n += dlen
opcodes.append(opcode)
datas.append(data)
except (IndexError, struct.error):
# Truncated script; e.g. tx_hash
# ebc9fa1196a59e192352d76c0f6e73167046b9d37b8302b6bb6968dfd279b767
raise ScriptError('truncated script')
return opcodes, datas
@classmethod
def match_ops(cls, ops, pattern):
if len(ops) != len(pattern):
return False
for op, pop in zip(ops, pattern):
if pop != op:
# -1 Indicates data push expected
if pop == -1 and OpCodes.OP_0 <= op <= OpCodes.OP_PUSHDATA4:
continue
return False
return True
@classmethod
def push_data(cls, data):
'''Returns the opcodes to push the data on the stack.'''
assert isinstance(data, (bytes, bytearray))
n = len(data)
if n < OpCodes.OP_PUSHDATA1:
return bytes([n]) + data
if n < 256:
return bytes([OpCodes.OP_PUSHDATA1, n]) + data
if n < 65536:
return bytes([OpCodes.OP_PUSHDATA2]) + struct.pack('<H', n) + data
return bytes([OpCodes.OP_PUSHDATA4]) + struct.pack('<I', n) + data
@classmethod
def opcode_name(cls, opcode):
if OpCodes.OP_0 < opcode < OpCodes.OP_PUSHDATA1:
return 'OP_{:d}'.format(opcode)
try:
return OpCodes.whatis(opcode)
except KeyError:
return 'OP_UNKNOWN:{:d}'.format(opcode)
@classmethod
def dump(cls, script):
opcodes, datas = cls.get_ops(script)
for opcode, data in zip(opcodes, datas):
name = cls.opcode_name(opcode)
if data is None:
print(name)
else:
print('{} {} ({:d} bytes)'
.format(name, hexlify(data).decode('ascii'), len(data)))

140
lib/tx.py

@@ -0,0 +1,140 @@
# See the file "COPYING" for information about the copyright
# and warranty status of this software.
from collections import namedtuple
import binascii
import struct
from lib.util import cachedproperty
from lib.hash import double_sha256
class Tx(namedtuple("Tx", "version inputs outputs locktime")):
@cachedproperty
def is_coinbase(self):
return self.inputs[0].is_coinbase
OutPoint = namedtuple("OutPoint", "hash n")
# prevout is an OutPoint object
class TxInput(namedtuple("TxInput", "prevout script sequence")):
ZERO = bytes(32)
MINUS_1 = 4294967295
@cachedproperty
def is_coinbase(self):
return self.prevout == (TxInput.ZERO, TxInput.MINUS_1)
@cachedproperty
def script_sig_info(self):
# No meaning for coinbases
if self.is_coinbase:
return None
# Note: no such parser exists yet; ScriptSig.from_script in lib.script
# is the real entry point and needs a coin argument. The indexer does
# not call this helper.
return Script.parse_script_sig(self.script)
def __repr__(self):
script = binascii.hexlify(self.script).decode("ascii")
prev_hash = binascii.hexlify(self.prevout.hash).decode("ascii")
return ("Input(prevout=({}, {:d}), script={}, sequence={:d})"
.format(prev_hash, self.prevout.n, script, self.sequence))
class TxOutput(namedtuple("TxOutput", "value pk_script")):
@cachedproperty
def pay_to(self):
# Note: placeholder; ScriptPubKey.from_script in lib.script does the
# real parsing and needs a coin argument. The indexer does not call
# this helper.
return Script.parse_pk_script(self.pk_script)
class Deserializer(object):
def __init__(self, binary):
assert isinstance(binary, (bytes, memoryview))
self.binary = binary
self.cursor = 0
def read_tx(self):
version = self.read_le_int32()
inputs = self.read_inputs()
outputs = self.read_outputs()
locktime = self.read_le_uint32()
return Tx(version, inputs, outputs, locktime)
def read_block(self):
tx_hashes = []
txs = []
tx_count = self.read_varint()
for n in range(tx_count):
start = self.cursor
tx = self.read_tx()
# Note this hash needs to be reversed for human display
# For efficiency we store it in the natural serialized order
tx_hash = double_sha256(self.binary[start:self.cursor])
tx_hashes.append(tx_hash)
txs.append(tx)
return tx_hashes, txs
def read_inputs(self):
n = self.read_varint()
return [self.read_input() for i in range(n)]
def read_input(self):
prevout = self.read_outpoint()
script = self.read_varbytes()
sequence = self.read_le_uint32()
return TxInput(prevout, script, sequence)
def read_outpoint(self):
hash = self.read_nbytes(32)
n = self.read_le_uint32()
return OutPoint(hash, n)
def read_outputs(self):
n = self.read_varint()
return [self.read_output() for i in range(n)]
def read_output(self):
value = self.read_le_int64()
pk_script = self.read_varbytes()
return TxOutput(value, pk_script)
def read_nbytes(self, n):
result = self.binary[self.cursor:self.cursor + n]
self.cursor += n
return result
def read_varbytes(self):
return self.read_nbytes(self.read_varint())
def read_varint(self):
b = self.binary[self.cursor]
self.cursor += 1
if b < 253:
return b
if b == 253:
return self.read_le_uint16()
if b == 254:
return self.read_le_uint32()
return self.read_le_uint64()
def read_le_int32(self):
return self.read_format('<i')
def read_le_int64(self):
return self.read_format('<q')
def read_le_uint16(self):
return self.read_format('<H')
def read_le_uint32(self):
return self.read_format('<I')
def read_le_uint64(self):
return self.read_format('<Q')
def read_format(self, fmt):
(result,) = struct.unpack_from(fmt, self.binary, self.cursor)
self.cursor += struct.calcsize(fmt)
return result

66
lib/util.py

@@ -0,0 +1,66 @@
# See the file "COPYING" for information about the copyright
# and warranty status of this software.
import sys
class Log(object):
'''Logging base class'''
VERBOSE = True
def diagnostic_name(self):
return self.__class__.__name__
def log(self, *msgs):
if Log.VERBOSE:
print('[{}]: '.format(self.diagnostic_name()), *msgs,
file=sys.stdout, flush=True)
def log_error(self, *msgs):
print('[{}]: ERROR:'.format(self.diagnostic_name()), *msgs,
file=sys.stderr, flush=True)
# Method decorator. To be used for calculations that will always
# deliver the same result. The method cannot take any arguments
# and should be accessed as an attribute.
class cachedproperty(object):
def __init__(self, f):
self.f = f
def __get__(self, obj, type):
if obj is None:
return self
value = self.f(obj)
obj.__dict__[self.f.__name__] = value
return value
def __set__(self, obj, value):
raise AttributeError('cannot set {} on {}'
.format(self.f.__name__, obj))
def chunks(items, size):
for i in range(0, len(items), size):
yield items[i: i + size]
def bytes_to_int(be_bytes):
'''Interprets a big-endian sequence of bytes as an integer'''
assert isinstance(be_bytes, (bytes, bytearray))
value = 0
for byte in be_bytes:
value = value * 256 + byte
return value
def int_to_bytes(value):
'''Converts an integer to a big-endian sequence of bytes'''
mods = []
while value:
value, mod = divmod(value, 256)
mods.append(mod)
return bytes(reversed(mods))

15
samples/scripts/NOTES

@@ -0,0 +1,15 @@
The following environment variables are required:
COIN - see lib/coins.py, must be a coin NAME
NETWORK - see lib/coins.py, must be a coin NET
DB_DIRECTORY - path to the database directory (if relative, to run script)
USERNAME - the username the server will run as
SERVER_MAIN - path to the server_main.py script (if relative, to run script)
In addition either RPC_URL must be given as the full RPC URL for
connecting to the daemon, or you must specify RPC_HOST, RPC_USERNAME,
RPC_PASSWORD and optionally RPC_PORT (it defaults appropriately for
the coin and network otherwise).
The other environment variables are all optional and will adopt sensible defaults if not
specified.
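For example, an RPC_URL built from those pieces takes this form
(values illustrative):
RPC_URL=http://rpc_username:rpc_password@192.168.0.1:8332/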

1
samples/scripts/env/COIN

@@ -0,0 +1 @@
Bitcoin

1
samples/scripts/env/DB_DIRECTORY

@@ -0,0 +1 @@
/path/to/db/directory

1
samples/scripts/env/FLUSH_SIZE

@@ -0,0 +1 @@
4000000

1
samples/scripts/env/NETWORK

@@ -0,0 +1 @@
mainnet

1
samples/scripts/env/RPC_HOST

@@ -0,0 +1 @@
192.168.0.1

1
samples/scripts/env/RPC_PASSWORD

@@ -0,0 +1 @@
your_daemon's_rpc_password

1
samples/scripts/env/RPC_PORT

@@ -0,0 +1 @@
8332

1
samples/scripts/env/RPC_USERNAME

@@ -0,0 +1 @@
your_daemon's_rpc_username

1
samples/scripts/env/SERVER_MAIN

@@ -0,0 +1 @@
/path/to/repos/electrumx/server_main.py

1
samples/scripts/env/USERNAME

@@ -0,0 +1 @@
electrumx

2
samples/scripts/log/run

@@ -0,0 +1,2 @@
#!/bin/sh
exec multilog t s500000 n10 /path/to/log/dir

3
samples/scripts/run

@@ -0,0 +1,3 @@
#!/bin/sh
echo "Launching ElectrumX server..."
exec 2>&1 envdir ./env /bin/sh -c 'envuidgid $USERNAME python3 $SERVER_MAIN'

0
server/__init__.py

470
server/db.py

@@ -0,0 +1,470 @@
# See the file "COPYING" for information about the copyright
# and warranty status of this software.
import array
import itertools
import os
import struct
import time
from binascii import hexlify, unhexlify
from bisect import bisect_right
from collections import defaultdict, namedtuple
from functools import partial
import logging
import plyvel
from lib.coins import Bitcoin
from lib.script import ScriptPubKey
ADDR_TX_HASH_LEN=6
UTXO_TX_HASH_LEN=4
HIST_ENTRY_LEN=256*4 # Admits 65536 * HIST_ENTRY_LEN/4 entries
UTXO = namedtuple("UTXO", "tx_num tx_pos tx_hash height value")
def to_4_bytes(value):
return struct.pack('<I', value)
def from_4_bytes(b):
return struct.unpack('<I', b)[0]
class DB(object):
HEIGHT_KEY = b'height'
TIP_KEY = b'tip'
GENESIS_KEY = b'genesis'
TX_COUNT_KEY = b'tx_count'
WALL_TIME_KEY = b'wall_time'
class Error(Exception):
pass
def __init__(self, env):
self.logger = logging.getLogger('DB')
self.logger.setLevel(logging.INFO)
self.coin = env.coin
self.flush_size = env.flush_size
self.logger.info('using flush size of {:,d} entries'
.format(self.flush_size))
self.tx_counts = array.array('I')
self.tx_hash_file_size = 4*1024*1024
# Unflushed items. Headers and tx_hashes have one entry per block
self.headers = []
self.tx_hashes = []
self.history = defaultdict(list)
self.writes_avoided = 0
self.read_cache_hits = 0
self.write_cache_hits = 0
self.last_writes = 0
self.last_time = time.time()
# Things put in a batch are not visible until the batch is written,
# so use a cache.
# Semantics: a key/value pair in this dictionary represents the
# in-memory state of the DB. Anything in this dictionary will be
# written at the next flush.
self.write_cache = {}
# Read cache: a key/value pair in this dictionary represents
# something read from the DB; it is on-disk as of the prior
# flush. If a key is in write_cache that value is more
# recent. Any key in write_cache and not in read_cache has
# never hit the disk.
self.read_cache = {}
db_name = '{}-{}'.format(self.coin.NAME, self.coin.NET)
try:
self.db = self.open_db(db_name, False)
except plyvel.Error:
self.db = self.open_db(db_name, True)
self.headers_file = self.open_file('headers', True)
self.txcount_file = self.open_file('txcount', True)
self.init_db()
self.logger.info('created new database {}'.format(db_name))
else:
self.logger.info('successfully opened database {}'.format(db_name))
self.headers_file = self.open_file('headers')
self.txcount_file = self.open_file('txcount')
self.read_db()
# Note that DB_HEIGHT is the height of the next block to be written.
# So an empty DB has a DB_HEIGHT of 0 not -1.
self.tx_count = self.db_tx_count
self.height = self.db_height - 1
self.tx_counts.fromfile(self.txcount_file, self.db_height)
if self.tx_count == 0:
self.flush()
def open_db(self, db_name, create):
return plyvel.DB(db_name, create_if_missing=create,
error_if_exists=create,
compression=None)
# lru_cache_size=256*1024*1024)
def init_db(self):
self.db_height = 0
self.db_tx_count = 0
self.wall_time = 0
self.tip = self.coin.GENESIS_HASH
self.put(self.GENESIS_KEY, unhexlify(self.tip))
def read_db(self):
genesis_hash = hexlify(self.get(self.GENESIS_KEY))
if genesis_hash != self.coin.GENESIS_HASH:
raise self.Error('DB genesis hash {} does not match coin {}'
.format(genesis_hash, self.coin.GENESIS_HASH))
self.db_height = from_4_bytes(self.get(self.HEIGHT_KEY))
self.db_tx_count = from_4_bytes(self.get(self.TX_COUNT_KEY))
self.wall_time = from_4_bytes(self.get(self.WALL_TIME_KEY))
self.tip = hexlify(self.get(self.TIP_KEY))
self.logger.info('{}/{} height: {:,d} tx count: {:,d} sync time: {}'
.format(self.coin.NAME, self.coin.NET,
self.db_height - 1, self.db_tx_count,
self.formatted_wall_time()))
def formatted_wall_time(self):
wall_time = int(self.wall_time)
return '{:d}d {:02d}h {:02d}m {:02d}s'.format(
wall_time // 86400, (wall_time % 86400) // 3600,
(wall_time % 3600) // 60, wall_time % 60)
def get(self, key):
# Get a key from write_cache, then read_cache, then the DB
value = self.write_cache.get(key)
if not value:
value = self.read_cache.get(key)
if not value:
value = self.db.get(key)
self.read_cache[key] = value
else:
self.read_cache_hits += 1
else:
self.write_cache_hits += 1
return value
def put(self, key, value):
assert(bool(value))
self.write_cache[key] = value
def delete(self, key):
# Deleting an on-disk key requires a later physical delete
# If it's not on-disk we can just drop it entirely
if self.read_cache.get(key) is None:
self.writes_avoided += 1
self.write_cache.pop(key, None)
else:
self.write_cache[key] = None
def put_state(self):
now = time.time()
self.wall_time += now - self.last_time
self.last_time = now
self.db_tx_count = self.tx_count
self.db_height = self.height + 1
self.put(self.HEIGHT_KEY, to_4_bytes(self.db_height))
self.put(self.TX_COUNT_KEY, to_4_bytes(self.db_tx_count))
self.put(self.TIP_KEY, unhexlify(self.tip))
self.put(self.WALL_TIME_KEY, to_4_bytes(int(self.wall_time)))
def flush(self):
# Write out the files to the FS before flushing to the DB. If
# the DB transaction fails, the files being too long doesn't
# matter. But if writing the files fails we do not want to
# have updated the DB. This disk flush is fast.
self.write_headers()
self.write_tx_counts()
self.write_tx_hashes()
tx_diff = self.tx_count - self.db_tx_count
height_diff = self.height + 1 - self.db_height
self.logger.info('flushing to levelDB {:,d} txs and {:,d} blocks '
'to height {:,d} tx count: {:,d}'
.format(tx_diff, height_diff, self.height,
self.tx_count))
# This LevelDB flush is slow
deletes = 0
writes = 0
with self.db.write_batch(transaction=True) as batch:
# Flush the state, then the cache, then the history
self.put_state()
for key, value in self.write_cache.items():
if value is None:
batch.delete(key)
deletes += 1
else:
batch.put(key, value)
writes += 1
self.flush_history()
self.logger.info('flushed. Cache hits: {:,d}/{:,d} writes: {:,d} '
'deletes: {:,d} elided: {:,d} sync: {}'
.format(self.write_cache_hits,
self.read_cache_hits, writes, deletes,
self.writes_avoided,
self.formatted_wall_time()))
# Note this preserves semantics and hopefully saves time
self.read_cache = self.write_cache
self.write_cache = {}
self.writes_avoided = 0
self.read_cache_hits = 0
self.write_cache_hits = 0
self.last_writes = writes
def flush_history(self):
# Drop any None entry
self.history.pop(None, None)
for hash160, hist in self.history.items():
prefix = b'H' + hash160
for key, v in self.db.iterator(reverse=True, prefix=prefix,
fill_cache=False):
assert len(key) == 23
v += array.array('I', hist).tobytes()
break
else:
key = prefix + bytes(2)
v = array.array('I', hist).tobytes()
# db.put doesn't accept a memoryview!
self.db.put(key, v[:HIST_ENTRY_LEN])
if len(v) > HIST_ENTRY_LEN:
# must be big-endian
(idx, ) = struct.unpack('>H', key[-2:])
for n in range(HIST_ENTRY_LEN, len(v), HIST_ENTRY_LEN):
idx += 1
key = prefix + struct.pack('>H', idx)
if idx % 500 == 0:
addr = self.coin.P2PKH_address_from_hash160(hash160)
self.logger.info('address {} hist moving to idx {:d}'
.format(addr, idx))
self.db.put(key, v[n:n + HIST_ENTRY_LEN])
self.history = defaultdict(list)
def get_hash160(self, tx_hash, idx, delete=True):
key = b'h' + tx_hash[:ADDR_TX_HASH_LEN] + struct.pack('<H', idx)
data = self.get(key)
if data is None:
return None
if len(data) == 24:
if delete:
self.delete(key)
return data[:20]
# This should almost never happen
assert len(data) % 24 == 0
self.logger.info('hash160 compressed key collision {}'
.format(key.hex()))
for n in range(0, len(data), 24):
(tx_num, ) = struct.unpack('<I', data[n+20:n+24])
my_hash, height = self.get_tx_hash(tx_num)
if my_hash == tx_hash:
if delete:
self.put(key, data[:n] + data[n + 24:])
return data[n:n+20]
else:
raise Exception('could not resolve hash160 collision')
def spend_utxo(self, prevout):
hash160 = self.get_hash160(prevout.hash, prevout.n)
if hash160 is None:
# This indicates a successful spend of a non-standard script
# self.logger.info('ignoring spend of non-standard UTXO {}/{:d} '
# 'at height {:d}'
# .format(bytes(reversed(prevout.hash)).hex(),
# prevout.n, self.height))
return None
key = (b'u' + hash160 + prevout.hash[:UTXO_TX_HASH_LEN]
+ struct.pack('<H', prevout.n))
data = self.get(key)
if len(data) == 12:
(tx_num, ) = struct.unpack('<I', data[:4])
self.delete(key)
else:
# This should almost never happen
assert len(data) % (4 + 8) == 0
self.logger.info('UTXO compressed key collision at height {:d}, '
'utxo {} / {:d}'
.format(self.height, bytes(reversed(prevout.hash))
.hex(), prevout.n))
for n in range(0, len(data), 12):
(tx_num, ) = struct.unpack('<I', data[n:n+4])
tx_hash, height = self.get_tx_hash(tx_num)
if prevout.hash == tx_hash:
break
else:
raise Exception('could not resolve UTXO key collision')
data = data[:n] + data[n + 12:]
self.put(key, data)
return hash160
def put_utxo(self, tx_hash, idx, txout):
pk = ScriptPubKey.from_script(txout.pk_script, self.coin)
if not pk.hash160:
return None
pack = struct.pack
idxb = pack('<H', idx)
txcb = pack('<I', self.tx_count)
# First write the hash160 lookup
key = b'h' + tx_hash[:ADDR_TX_HASH_LEN] + idxb
# b'' avoids this annoyance: https://bugs.python.org/issue13298
value = b''.join([pk.hash160, txcb])
prior_value = self.get(key)
if prior_value: # Should almost never happen
value += prior_value
self.put(key, value)
# Next write the UTXO
key = b'u' + pk.hash160 + tx_hash[:UTXO_TX_HASH_LEN] + idxb
value = txcb + pack('<Q', txout.value)
prior_value = self.get(key)
if prior_value: # Should almost never happen
value += prior_value
self.put(key, value)
return pk.hash160
def open_file(self, filename, truncate=False, create=False):
try:
return open(filename, 'wb+' if truncate else 'rb+')
except FileNotFoundError:
if create:
return open(filename, 'wb+')
raise
def read_headers(self, height, count):
header_len = self.coin.HEADER_LEN
self.headers_file.seek(height * header_len)
return self.headers_file.read(count * header_len)
def write_headers(self):
headers = b''.join(self.headers)
header_len = self.coin.HEADER_LEN
assert len(headers) % header_len == 0
self.headers_file.seek(self.db_height * header_len)
self.headers_file.write(headers)
self.headers_file.flush()
self.headers = []
def write_tx_counts(self):
self.txcount_file.seek(self.db_height * self.tx_counts.itemsize)
self.txcount_file.write(
self.tx_counts[self.db_height:self.height + 1].tobytes())
self.txcount_file.flush()
def write_tx_hashes(self):
hash_blob = b''.join(itertools.chain(*self.tx_hashes))
assert len(hash_blob) % 32 == 0
assert self.tx_hash_file_size % 32 == 0
hashes = memoryview(hash_blob)
cursor = 0
file_pos = self.db_tx_count * 32
while cursor < len(hashes):
file_num, offset = divmod(file_pos, self.tx_hash_file_size)
size = min(len(hashes) - cursor, self.tx_hash_file_size - offset)
filename = 'hashes{:05d}'.format(file_num)
with self.open_file(filename, create=True) as f:
f.seek(offset)
f.write(hashes[cursor:cursor + size])
cursor += size
file_pos += size
self.tx_hashes = []
def process_block(self, block):
self.headers.append(block[:self.coin.HEADER_LEN])
tx_hashes, txs = self.coin.read_block(block)
self.height += 1
assert len(self.tx_counts) == self.height
# These both need to be updated before calling process_tx().
# It uses them for tx hash lookup
self.tx_hashes.append(tx_hashes)
self.tx_counts.append(self.tx_count + len(txs))
for tx_hash, tx in zip(tx_hashes, txs):
self.process_tx(tx_hash, tx)
# Flush if we're getting full
if len(self.write_cache) + len(self.history) > self.flush_size:
self.flush()
def process_tx(self, tx_hash, tx):
hash160s = set()
if not tx.is_coinbase:
for txin in tx.inputs:
hash160s.add(self.spend_utxo(txin.prevout))
for idx, txout in enumerate(tx.outputs):
hash160s.add(self.put_utxo(tx_hash, idx, txout))
for hash160 in hash160s:
self.history[hash160].append(self.tx_count)
self.tx_count += 1
def get_tx_hash(self, tx_num):
'''Returns the tx_hash and height of a tx number.'''
height = bisect_right(self.tx_counts, tx_num)
# Is this on disk or unflushed?
if height >= self.db_height:
tx_hashes = self.tx_hashes[height - self.db_height]
tx_hash = tx_hashes[tx_num - self.tx_counts[height - 1]]
else:
file_pos = tx_num * 32
file_num, offset = divmod(file_pos, self.tx_hash_file_size)
filename = 'hashes{:05d}'.format(file_num)
with self.open_file(filename) as f:
f.seek(offset)
tx_hash = f.read(32)
return tx_hash, height
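    # Worked example with made-up numbers: if tx_counts == [1, 3, 7]
    # then tx_num 4 gives bisect_right(tx_counts, 4) == 2, so the tx is
    # in the block at height 2 and its hash is entry 4 - tx_counts[1]
    # == 1 of that block's tx_hashes.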
def get_balance(self, hash160):
'''Returns the confirmed balance of an address.'''
        utxos = self.get_utxos(hash160)
return sum(utxo.value for utxo in utxos)
def get_history(self, hash160):
'''Returns a sorted list of (tx_hash, height) tuples of transactions
that touched the address, earliest in the blockchain first.
        Includes both the transactions that created the address's
        outputs and those that later spent them.
'''
prefix = b'H' + hash160
a = array.array('I')
for key, hist in self.db.iterator(prefix=prefix):
a.frombytes(hist)
return [self.get_tx_hash(tx_num) for tx_num in a]
def get_utxos(self, hash160):
'''Returns all UTXOs for an address sorted such that the earliest
in the blockchain comes first.
'''
unpack = struct.unpack
prefix = b'u' + hash160
utxos = []
for k, v in self.db.iterator(prefix=prefix):
(tx_pos, ) = unpack('<H', k[-2:])
for n in range(0, len(v), 12):
(tx_num, ) = unpack('<I', v[n:n+4])
(value, ) = unpack('<Q', v[n+4:n+12])
tx_hash, height = self.get_tx_hash(tx_num)
utxos.append(UTXO(tx_num, tx_pos, tx_hash, height, value))
# Sorted by height and block position.
return sorted(utxos)
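# A minimal usage sketch, in the style of the commented-out examples in
# server/server.py (the address and flow are illustrative only):
#
#   db = DB(env)
#   hash160 = coin.address_to_hash160('1A1zP1eP5QGefi2DMPTfTL5SLmv7DivfNa')
#   for utxo in db.get_utxos(hash160):
#       print(utxo.height, utxo.value)
#   print('balance:', db.get_balance(hash160))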

54
server/env.py

@ -0,0 +1,54 @@
# See the file "COPYING" for information about the copyright
# and warranty status of this software.
import logging
from os import environ
from lib.coins import Coin
class Env(object):
'''Wraps environment configuration.'''
class Error(Exception):
pass
def __init__(self):
self.logger = logging.getLogger('Env')
self.logger.setLevel(logging.INFO)
coin_name = self.default('COIN', 'Bitcoin')
network = self.default('NETWORK', 'mainnet')
self.coin = Coin.lookup_coin_class(coin_name, network)
self.db_dir = self.required('DB_DIRECTORY')
self.flush_size = self.integer('FLUSH_SIZE', 1000000)
self.rpc_url = self.build_rpc_url()
def default(self, envvar, default):
return environ.get(envvar, default)
def required(self, envvar):
value = environ.get(envvar)
if value is None:
raise self.Error('required envvar {} not set'.format(envvar))
return value
def integer(self, envvar, default):
value = environ.get(envvar)
if value is None:
return default
try:
return int(value)
        except ValueError:
raise self.Error('cannot convert envvar {} value {} to an integer'
.format(envvar, value))
def build_rpc_url(self):
rpc_url = environ.get('RPC_URL')
if not rpc_url:
rpc_username = self.required('RPC_USERNAME')
rpc_password = self.required('RPC_PASSWORD')
rpc_host = self.required('RPC_HOST')
rpc_port = self.default('RPC_PORT', self.coin.DEFAULT_RPC_PORT)
rpc_url = ('http://{}:{}@{}:{}/'
.format(rpc_username, rpc_password, rpc_host, rpc_port))
return rpc_url
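# For example (illustrative values), with the daemontools-style env
# files from samples/scripts/env giving:
#
#   RPC_USERNAME=user  RPC_PASSWORD=pass  RPC_HOST=127.0.0.1  RPC_PORT=8332
#
# build_rpc_url() yields 'http://user:pass@127.0.0.1:8332/'.  Setting
# RPC_URL directly overrides all four.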

232
server/server.py

@ -0,0 +1,232 @@
# See the file "COPYING" for information about the copyright
# and warranty status of this software.
import asyncio
import json
import logging
import os
import signal
from functools import partial
import aiohttp
from server.db import DB
class Server(object):
def __init__(self, env, loop):
self.env = env
self.db = DB(env)
self.rpc = RPC(env)
self.block_cache = BlockCache(env, self.db, self.rpc, loop)
def async_tasks(self):
return [
asyncio.ensure_future(self.block_cache.catch_up()),
asyncio.ensure_future(self.block_cache.process_cache()),
]
class BlockCache(object):
'''Requests blocks ahead of time from the daemon. Serves them
to the blockchain processor.'''
def __init__(self, env, db, rpc, loop):
self.logger = logging.getLogger('BlockCache')
self.logger.setLevel(logging.INFO)
self.db = db
self.rpc = rpc
self.stop = False
# Cache target size is in MB. Has little effect on sync time.
self.cache_limit = 10
self.daemon_height = 0
self.fetched_height = db.db_height
# Blocks stored in reverse order. Next block is at end of list.
self.blocks = []
self.recent_sizes = []
self.ave_size = 0
for signame in ('SIGINT', 'SIGTERM'):
loop.add_signal_handler(getattr(signal, signame),
partial(self.on_signal, signame))
def on_signal(self, signame):
        self.logger.warning('Received {} signal, preparing to shut down'
                            .format(signame))
self.blocks = []
self.stop = True
async def process_cache(self):
while not self.stop:
await asyncio.sleep(1)
while self.blocks:
self.db.process_block(self.blocks.pop())
                # Yield so asynchronous block fetching can proceed
await asyncio.sleep(0)
self.db.flush()
async def catch_up(self):
self.logger.info('catching up, block cache limit {:d}MB...'
.format(self.cache_limit))
while await self.maybe_prefill():
await asyncio.sleep(1)
if not self.stop:
self.logger.info('caught up to height {:d}'
.format(self.daemon_height))
def cache_used(self):
return sum(len(block) for block in self.blocks)
def prefill_count(self, room):
count = 0
if self.ave_size:
count = room // self.ave_size
return max(count, 10)
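    # Example with made-up numbers: 10MB of room and an average block
    # size of 500,000 bytes gives 10,485,760 // 500,000 == 20 blocks;
    # the floor of 10 matters only before ave_size is known or when
    # blocks are very large.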
async def maybe_prefill(self):
        '''Returns False when fully caught up or told to stop; returns
        True when the caller should sleep briefly and try again.'''
cache_limit = self.cache_limit * 1024 * 1024
while True:
if self.stop:
return False
cache_used = self.cache_used()
if cache_used > cache_limit:
return True
# Keep going by getting a whole new cache_limit of blocks
self.daemon_height = await self.rpc.rpc_single('getblockcount')
max_count = min(self.daemon_height - self.fetched_height, 4000)
count = min(max_count, self.prefill_count(cache_limit))
if not count or self.stop:
return False # Done catching up
# self.logger.info('requesting {:,d} blocks'.format(count))
first = self.fetched_height + 1
param_lists = [[height] for height in range(first, first + count)]
hashes = await self.rpc.rpc_multi('getblockhash', param_lists)
if self.stop:
return False
            # hashes is a list of hex strings
param_lists = [(h, False) for h in hashes]
blocks = await self.rpc.rpc_multi('getblock', param_lists)
self.fetched_height += count
if self.stop:
return False
# Convert hex string to bytes and put in memoryview
blocks = [memoryview(bytes.fromhex(block)) for block in blocks]
# Reverse order and place at front of list
self.blocks = list(reversed(blocks)) + self.blocks
self.logger.info('prefilled {:,d} blocks to height {:,d} '
'daemon height: {:,d} block cache size: {:,d}'
.format(count, self.fetched_height,
self.daemon_height, self.cache_used()))
# Keep 50 most recent block sizes for fetch count estimation
sizes = [len(block) for block in blocks]
self.recent_sizes.extend(sizes)
excess = len(self.recent_sizes) - 50
if excess > 0:
self.recent_sizes = self.recent_sizes[excess:]
self.ave_size = sum(self.recent_sizes) // len(self.recent_sizes)
class RPC(object):
def __init__(self, env):
self.logger = logging.getLogger('RPC')
self.logger.setLevel(logging.INFO)
self.rpc_url = env.rpc_url
self.logger.info('using RPC URL {}'.format(self.rpc_url))
async def rpc_multi(self, method, param_lists):
payload = [{'method': method, 'params': param_list}
for param_list in param_lists]
while True:
dresults = await self.daemon(payload)
errs = [dresult['error'] for dresult in dresults]
if not any(errs):
return [dresult['result'] for dresult in dresults]
for err in errs:
                # err is None for requests in the batch that succeeded
                if err and err.get('code') == -28:
self.logger.warning('daemon still warming up...')
secs = 10
break
else:
self.logger.error('daemon returned errors: {}'.format(errs))
secs = 0
self.logger.info('sleeping {:d} seconds and trying again...'
.format(secs))
await asyncio.sleep(secs)
async def rpc_single(self, method, params=None):
payload = {'method': method}
if params:
payload['params'] = params
while True:
dresult = await self.daemon(payload)
err = dresult['error']
if not err:
return dresult['result']
if err.get('code') == -28:
self.logger.warning('daemon still warming up...')
secs = 10
else:
self.logger.error('daemon returned error: {}'.format(err))
secs = 0
self.logger.info('sleeping {:d} seconds and trying again...'
.format(secs))
await asyncio.sleep(secs)
async def daemon(self, payload):
while True:
try:
async with aiohttp.ClientSession() as session:
async with session.post(self.rpc_url,
data=json.dumps(payload)) as resp:
return await resp.json()
except Exception as e:
self.logger.error('aiohttp error: {}'.format(e))
self.logger.info('sleeping 1 second and trying again...')
await asyncio.sleep(1)
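    # The daemon speaks Bitcoin's JSON-RPC over HTTP.  A batched
    # request and reply look roughly like this (illustrative, trimmed):
    #
    #   request: [{'method': 'getblockhash', 'params': [0]}, ...]
    #   reply:   [{'result': '000000000019d6...', 'error': None}, ...]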
# for addr in [
# # '1dice8EMZmqKvrGE4Qc9bUFf9PX3xaYDp',
# # '1HYBcza9tVquCCvCN1hUZkYT9RcM6GfLot',
# # '1BNwxHGaFbeUBitpjy2AsKpJ29Ybxntqvb',
# # '1ARanTkswPiVM6tUEYvbskyqDsZpweiciu',
# # '1VayNert3x1KzbpzMGt2qdqrAThiRovi8',
# # '1A1zP1eP5QGefi2DMPTfTL5SLmv7DivfNa',
# # '1XPTgDRhN8RFnzniWCddobD9iKZatrvH4',
# # '153h6eE6xRhXuN3pE53gWVfXacAtfyBF8g',
# ]:
# print('Address: ', addr)
# hash160 = coin.address_to_hash160(addr)
# utxos = self.db.get_utxos(hash160)
# for n, utxo in enumerate(utxos):
# print('UTXOs #{:d}: hash: {} pos: {:d} height: {:d} value: {:d}'
# .format(n, bytes(reversed(utxo.tx_hash)).hex(),
# utxo.tx_pos, utxo.height, utxo.value))
# for addr in [
# '19k8nToWwMGuF4HkNpzgoVAYk4viBnEs5D',
# '1HaHTfmvoUW6i6nhJf8jJs6tU4cHNmBQHQ',
# '1XPTgDRhN8RFnzniWCddobD9iKZatrvH4',
# ]:
# print('Address: ', addr)
# hash160 = coin.address_to_hash160(addr)
# for n, (tx_hash, height) in enumerate(self.db.get_history(hash160)):
# print('History #{:d}: hash: {} height: {:d}'
# .format(n + 1, bytes(reversed(tx_hash)).hex(), height))

48
server_main.py

@ -0,0 +1,48 @@
#!/usr/bin/env python3
# See the file "COPYING" for information about the copyright
# and warranty status of this software.
import asyncio
import logging
import os
import traceback
from server.env import Env
from server.server import Server
def main_loop():
'''Get tasks; loop until complete.'''
if os.geteuid() == 0:
        raise Exception('DO NOT RUN AS ROOT! Create an unprivileged user '
                        'account and use that')
env = Env()
logging.info('switching current directory to {}'.format(env.db_dir))
os.chdir(env.db_dir)
loop = asyncio.get_event_loop()
try:
server = Server(env, loop)
tasks = server.async_tasks()
loop.run_until_complete(asyncio.gather(*tasks))
finally:
loop.close()
def main():
'''Set up logging, enter main loop.'''
logging.basicConfig(level=logging.INFO)
logging.info('ElectrumX server starting')
try:
main_loop()
except Exception:
traceback.print_exc()
logging.critical('ElectrumX server terminated abnormally')
else:
logging.info('ElectrumX server terminated normally')
if __name__ == '__main__':
main()
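# A minimal way to run by hand (the daemontools scripts under
# samples/scripts are the intended method; values are illustrative):
#
#   $ export DB_DIRECTORY=/path/to/db COIN=Bitcoin NETWORK=mainnet
#   $ export RPC_USERNAME=user RPC_PASSWORD=pass RPC_HOST=127.0.0.1
#   $ ./server_main.py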