Previously network.py had its own idea of request IDs, and
each interface had its own, which was what went on the wire.
The interface would jump through hoops to translate one to
the other.
This unifies them so that a message ID is passed when
queueing a request, in addition to the method and params.
network.py is now solely responsible for message ID management.
Apart from being simpler and clearer, this should also be
faster, as much less data structure manipulation and
rebuilding happens.
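A minimal sketch of the new flow, using hypothetical names
(queue_request, unanswered_requests) rather than the actual
ones:

    # network.py hands out the message ID and passes it along when
    # queueing, so the interface sends that exact ID on the wire and
    # never has to re-number anything.
    class Network:
        def __init__(self):
            self.message_id = 0
            self.unanswered_requests = {}

        def queue_request(self, method, params, interface):
            message_id = self.message_id
            self.message_id += 1
            interface.queue_request(method, params, message_id)
            self.unanswered_requests[message_id] = (method, params, interface)
            return message_id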
This occurred when switching interfaces while there were
unanswered requests that needed resending. The bug isn't new;
it has been there since at least 3rd June.
The existing pattern matching code:
val.find('*.') == 0 and name.find(val[1:]) + len(val[1:]) == len(name)
will return True in the following case:
val = '*.host.com'
name = 'blah.org'
since name.find() returns -1, len(val[1:]) == 9 and
len(name) == 8, so -1 + 9 == len(name).
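As a sketch, the check could instead require the wildcard
suffix to actually appear at the end of the name (the real
fix may differ):

    def wildcard_match(val, name):
        # For val = '*.host.com', name = 'blah.org' the old arithmetic
        # gives name.find('.host.com') == -1 and -1 + 9 == 8 == len(name).
        # Requiring a real suffix match avoids the -1 trap.
        return val.startswith('*.') and name.endswith(val[1:])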
This is a layering violation: the SocketPipe doesn't own
the socket and provides no other way to close it, leading
to unnecessary complexity like that in interface.py.
I looked at daemon.py and NetworkProxy, the two other users,
and they don't close their sockets explicitly; they just let
them be garbage collected.
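A sketch of the intended ownership split, with illustrative
names: SocketPipe only reads and writes a borrowed socket,
and whoever created the socket decides when to close it (or
simply drops the last reference):

    import json

    class SocketPipe:
        def __init__(self, sock):
            self.socket = sock   # borrowed; closing is the owner's job

        def send(self, request):
            self.socket.sendall(json.dumps(request).encode() + b'\n')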
Set the timeout and socket options on all simple sockets. At
present some code paths can miss them, such as when the SSL
certificate is CA-signed.
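For illustration, a single helper applied to every freshly
created socket would make it impossible for a code path to
skip the settings; the helper name and the exact options
here are assumptions:

    import socket

    def configure_socket(sock, timeout=10):
        # Applied on every path, including the CA-signed certificate one.
        sock.settimeout(timeout)
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
        return sock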
Add a missing check for failure.
This allows us to distinguish between the connecting and
connected states in interface.py (this used to be done in
network.py, but that had other issues).
This means we don't switch to a server that is still
connecting, and get_interfaces() does not report connecting
ones.
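An illustrative sketch of the distinction; the names and
states are assumptions, not the actual interface.py code:

    class Interface:
        def __init__(self, server):
            self.server = server
            self.state = 'connecting'

        def on_connect(self, success):
            # The added failure check: only a successful connect counts.
            self.state = 'connected' if success else 'failed'

    def get_interfaces(interfaces):
        # Connecting interfaces are neither reported nor switched to.
        return [i.server for i in interfaces if i.state == 'connected']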
First, close the socket from the thread itself rather than
from the stop() function. This prevents another thread from
closing a socket that the interface thread is simultaneously
using.
Second, it occasionally happened that a parent thread such
as network.py would start() an interface, do a send_request()
and time out waiting for a response (timeouts are 0.1s). It
would then check is_connected(), get False, and assume the
connection had failed, when in fact the thread hadn't even
been scheduled or hadn't yet completed the socket connection.
Fix this by having self.connected start out True. If the
connection fails or times out, connected is set to False
soon enough.
Finally, for correctness we need to deepcopy the request
passed to send_request() rather than keep a reference to it.
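The deepcopy point as a sketch; unanswered_requests and the
'id' key are illustrative, not the real structures:

    from copy import deepcopy

    def send_request(self, request):
        # Keep our own copy; the caller may reuse or mutate its dict later.
        self.unanswered_requests[request['id']] = deepcopy(request)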
Currently requests are sent from the requestor's thread. The
lock is not properly held where necessary, so this is not
thread-safe. For example, it can race with the interface
thread stopping and closing the socket the requestor is
trying to send on.
Resolve such races by having send_request() simply queue the
request; the interface thread itself sends queued requests
asynchronously.
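A self-contained sketch of the queueing approach under
assumed names; only the interface thread ever writes to or
closes the socket, so send_request() cannot race with stop():

    import json
    import queue
    import socket
    from threading import Thread

    class Interface(Thread):
        def __init__(self, host, port):
            Thread.__init__(self)
            self.host, self.port = host, port
            self.connected = True              # start optimistic, per the earlier fix
            self.request_queue = queue.Queue()

        def send_request(self, request):
            # Callable from any thread: only enqueues, never touches the socket.
            self.request_queue.put(request)

        def stop(self):
            self.connected = False             # run() notices and cleans up

        def run(self):
            try:
                sock = socket.create_connection((self.host, self.port), 10)
            except OSError:
                self.connected = False
                return
            while self.connected:
                try:
                    request = self.request_queue.get(timeout=0.1)
                except queue.Empty:
                    continue
                sock.sendall(json.dumps(request).encode() + b'\n')
            sock.close()                       # closed by the thread itself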
First step in cleaning up the run() function.
In case of timeout, call stop(), which cleanly closes the
socket, rather than just setting is_connected to False.