Move interface communication out of blockchain.py and into
network.py. network.py now handles protocol requests for headers
and chunks; blockchain.py continues to handle their analysis and
verification.
If an interface provides a header chain that doesn't
connect, it is dismissed, as per a previous TODO comment.
This removes a thread and another source of timeouts.
I see no performance issues with this when truncating the
blockchain.
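
A minimal sketch of the resulting division of labour (method names
and request shapes here are illustrative, not the exact ones in the
diff):

    # network.py: speaks the protocol
    def request_header(self, interface, height):
        interface.send_request({'method': 'blockchain.block.get_header',
                                'params': [height]})

    def request_chunk(self, interface, index):
        interface.send_request({'method': 'blockchain.block.get_chunk',
                                'params': [index]})

    # blockchain.py: analyses and verifies what the network delivers
    def verify_header(self, header, prev_header):
        if self.hash_header(prev_header) != header.get('prev_block_hash'):
            raise Exception('header chain does not connect')
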
Rename 'result' to 'header' for clarity.
This allows us to distinguish between connecting and connected
state in interface.py (used to be done in network.py but that
had other issues).
This means we don't switch to a connecting server, and get_interfaces()
does not report connecting ones.
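
A sketch of the distinction (attribute and value names are
illustrative):

    class Interface:
        def __init__(self, server):
            self.server = server
            self.mode = 'connecting'    # socket opened, handshake pending

        def notify_connected(self):
            self.mode = 'connected'     # handshake complete, usable

        def is_connected(self):
            return self.mode == 'connected'

get_interfaces() then filters on is_connected(), so callers never
see half-open connections.
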
Unify pending servers with the interfaces we have created.
Existing code has the concept of pending servers, where a connection
thread is started but has not sent a connection notification, and
interfaces which have received the notification.
This separation caused a couple of minor bugs, and given the cleaner
semantics of unifying the two I don't think the separation is beneficial.
The bugs:
1) When stopping the network, we only stopped the connected interface
threads, not the pending ones. This would leave Python hanging
on exit if we don't make them daemon threads.
2) start_interface() did not check pending servers before starting
a new thread. Some of its callers did, but not all, so it was
possible to initiate two threads to one server and "lose" one thread.
Apart from fixing the above two issues, unification causes one more
change in semantics: we are now willing to switch to a connection
that is still pending (we don't switch to failed interfaces). I don't
think that is a problem: if it times out we'll just switch again when
we receive the disconnect notification, and previously the fact that
an interface was in the interfaces dictionary was no guarantee the
connection was good anyway: we might not yet have processed a pending
disconnection notification.
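
A minimal sketch of the unified bookkeeping; with a single interfaces
dictionary both bugs disappear (names are illustrative):

    def start_interface(self, server):
        if server in self.interfaces:
            return                          # already started; fixes bug 2
        interface = Interface(server, self.queue)
        self.interfaces[server] = interface # recorded before the thread runs
        interface.start()

    def stop_network(self):
        for interface in self.interfaces.values():
            interface.stop()                # pending ones included; fixes bug 1
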
Three places used to test for lagging and switch; commonize the
code. We test for lagging before the auto-connect check so that
lagging diagnostics are output even when auto-connect is off.
Lagging diagnostics moved to server_is_lagging().
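
A sketch of the commonized check; the two height helpers are
assumptions standing in for whatever network.py uses:

    def server_is_lagging(self):
        sh = self.get_server_height()   # height reported by default_server
        if not sh:
            self.print_error('no height for main interface')
            return False
        lh = self.get_local_height()    # height of our verified chain
        lagging = (lh - sh) > 1
        if lagging:
            self.print_error('%s is lagging (%d vs %d)'
                             % (self.default_server, sh, lh))
        return lagging
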
1) For new proxy or protocol, restart the network and default to
the requested server.
2) Otherwise if we aren't using the requested server, switch to it
3) Otherwise choose a random server if the requested server is
stale and auto_connect is True
As switch_to_interface() now has another user, move the logic
whereby we close the old interface to terminate its subscriptions
into switch_to_interface(), so that it lives in one place.
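
The three cases as straight-line code (a sketch; the server string
format follows the rest of network.py, other details are
illustrative):

    def set_parameters(self, host, port, protocol, proxy, auto_connect):
        server = '%s:%s:%s' % (host, port, protocol)
        if self.proxy != proxy or self.protocol != protocol:
            # 1) new proxy or protocol: restart and default to the server
            self.stop_network()
            self.default_server = server
            self.start_network(protocol, proxy)
        elif self.default_server != server:
            # 2) not on the requested server: switch to it
            self.switch_to_interface(server)
        elif auto_connect and self.server_is_lagging():
            # 3) requested server is stale: pick a random one
            self.switch_to_random_interface()
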
switch_to_random_interface used to do is_connected()
checks and remove interfaces when switching. This
is redundant: interfaces are only added to self.interfaces
once in is_connected() state from process_if_notification().
When they are disconnected, a notification is also received
and process_if_notification() performs the removal. Furthermore,
two other callers of switch_to_interface do not do explicit
is_connected() checks before calling it.
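
After the simplification the method reduces to roughly the following
(using the standard random module):

    def switch_to_random_interface(self):
        if self.interfaces:
            self.switch_to_interface(random.choice(list(self.interfaces)))
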
Traceback (most recent call last):
  File "src/electrum/lib/network.py", line 411, in process_request
    out['result'] = f(*params)
UnboundLocalError: local variable 'f' referenced before assignment
[Network] network error local variable 'f' referenced before assignment
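
The cause: f was only bound on the branches that recognized the
method, so an unknown method raised UnboundLocalError instead of
producing an error response. The shape of the fix (the dispatch
table name is illustrative):

    def process_request(self, request):
        method = request['method']
        params = request['params']
        f = self.handlers.get(method)   # f is always bound now
        if f is None:
            return {'error': 'unknown method: %s' % method}
        return {'result': f(*params)}
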
switch_to_interface() becomes the common place where
self.interface is set; therefore self.interface is not None
precisely when self.default_server is connected. Previously some
places required it to be connected and some didn't. Also an
interface change now sends the 'updated' notification
consistently - previously some did and some didn't.
Have network_start() call start_interfaces() - this also means
network restarts now do this.
Fix apparent off-by-one in start_interfaces()
Prefer to use safer self.is_connected() as it checks interface
is not None.
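
Given that invariant, the check is a one-liner (a sketch):

    def is_connected(self):
        # True precisely when self.default_server is connected, by the
        # invariant switch_to_interface() establishes
        return self.interface is not None
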
Using common code gives small observable changes in behaviour:
- slightly different print_error() messages
- network restarts now enter status 'connecting' which they
didn't before, which seems correct
- status 'connecting' is done with set_status() rather than
simply assigning the status, which seems more correct. Now
that the response_queue is available in the constructor this
works; it used to fail with 'response_queue is not defined'
This is somewhat cleaner as the proxy's pipe and network setup
was awkwardly interleaved. It also means network's constructor
is free to use both; currently some code is working around the
fact that the response queue doesn't exist in the constructor.
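
A sketch of the reordering (class and argument names are
illustrative): create the queues first, then pass both to the
constructor so it can use them immediately.

    import queue

    request_queue = queue.Queue()
    response_queue = queue.Queue()
    # __init__ may now call set_status() safely: the response queue
    # exists before the Network object does
    network = Network(config, request_queue, response_queue)
    network.start()
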
We store it in the config object instead of in the blockchain object.
The blockchain object now refers to its config, and calls refresh_height() to update it.
The network objects also refer to the config rather than the blockchain.
This is the first of many small steps to untangle the verifier from stored state and so
permit the history tab to work in offline mode. The refactoring will simultaneously clean
up a lot of accumulated cruft.
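
A sketch of the new flow; refresh_height() is named above, while the
config key and the local_height() helper are assumptions:

    class Blockchain:
        def __init__(self, config):
            self.config = config

        def refresh_height(self):
            # store the verified chain height in the config so that
            # readers need not hold a blockchain reference
            self.config.set_key('blockchain_local_height',
                                self.local_height(), True)

    # elsewhere, e.g. in the network or verifier code:
    height = config.get('blockchain_local_height', 0)
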