
support work over proxy HTTP/HTTPS/SOCKS #52

Closed
iwojim0 opened this issue Dec 25, 2023 · 7 comments

Comments

@iwojim0
Contributor

iwojim0 commented Dec 25, 2023

I tested dtn7-rs in the i2p overlay network and saw these messages on stdout:
root@092f12c73cf8:/# export http_proxy=http://127.0.0.1:4444;export https_proxy=http://127.0.0.1:4444;dtnd -n dtn://node1 -r epidemic -C http -e incoming -w 3000 --disable_nd -s http://node2.b32.i2p:3000/node2 -d
2023-12-25T17:38:32.096Z INFO dtnd > starting dtnd
2023-12-25T17:38:32.096Z INFO dtn7::dtnd::daemon > Local Node ID: dtn://node1/
2023-12-25T17:38:32.096Z INFO dtn7::dtnd::daemon > Work Dir: "/"
2023-12-25T17:38:32.096Z INFO dtn7::dtnd::daemon > DB Backend: mem
2023-12-25T17:38:32.096Z INFO dtn7::dtnd::daemon > Announcement Interval: 2s
2023-12-25T17:38:32.096Z INFO dtn7::dtnd::daemon > Janitor Interval: 10s
2023-12-25T17:38:32.096Z INFO dtn7::dtnd::daemon > Peer Timeout: 20s
2023-12-25T17:38:32.096Z INFO dtn7::dtnd::daemon > Web Port: 3000
2023-12-25T17:38:32.096Z INFO dtn7::dtnd::daemon > IPv4: true
2023-12-25T17:38:32.096Z INFO dtn7::dtnd::daemon > IPv6: false
2023-12-25T17:38:32.096Z INFO dtn7::dtnd::daemon > Generate Status Reports: false
2023-12-25T17:38:32.097Z INFO dtn7::dtnd::daemon > RoutingAgent: epidemic
2023-12-25T17:38:32.097Z INFO dtn7::dtnd::daemon > RoutingOptions: {}
2023-12-25T17:38:32.097Z INFO dtn7::dtnd::daemon > Adding CLA: HttpConvergenceLayer
2023-12-25T17:38:32.097Z INFO dtn7::dtnd::daemon > Adding static peer: http://node2.b32.i2p/node2
2023-12-25T17:38:32.097Z INFO dtn7::core > Registered new application agent for EID: dtn://node1/
2023-12-25T17:38:32.097Z INFO dtn7::core > Registered new application agent for EID: dtn://node1/incoming
2023-12-25T17:38:32.097Z INFO dtn7::dtnd::daemon > Starting convergency layers
2023-12-25T17:38:32.097Z INFO dtn7::dtnd::daemon > Setup http:3000
2023-12-25T17:38:32.098Z DEBUG dtn7::dtnd::janitor > running janitor
2023-12-25T17:38:32.098Z DEBUG dtn7::dtnd::janitor > cleaning up peers
2023-12-25T17:38:32.098Z DEBUG dtn7::dtnd::janitor > reprocessing bundles
2023-12-25T17:38:32.098Z DEBUG dtn7::core > time to process 0 bundles: 9.916µs
2023-12-25T17:38:42.099Z DEBUG dtn7::dtnd::janitor > running janitor
2023-12-25T17:38:42.099Z DEBUG dtn7::dtnd::janitor > cleaning up peers
2023-12-25T17:38:42.099Z DEBUG dtn7::dtnd::janitor > reprocessing bundles
2023-12-25T17:38:42.099Z DEBUG dtn7::core > time to process 0 bundles: 16.624µs
2023-12-25T17:38:45.212Z DEBUG dtn7::dtnd::httpd > Received for sending: 11
2023-12-25T17:38:45.212Z DEBUG dtn7::dtnd::httpd > Sending bundle dtn://node1/-756841125212-0 to dtn://node2/incoming
2023-12-25T17:38:45.212Z DEBUG dtn7::core::store::mem > inserting bundle dtn://node1/-756841125212-0 in to store
2023-12-25T17:38:45.212Z INFO dtn7::core::processing > Transmission of bundle requested: dtn://node1/-756841125212-0
2023-12-25T17:38:45.213Z INFO dtn7::core::processing > Dispatching bundle: dtn://node1/-756841125212-0
2023-12-25T17:38:45.213Z DEBUG dtn7::core::store::mem > get_bundle dtn://node1/-756841125212-0
2023-12-25T17:38:45.213Z DEBUG dtn7::routing::epidemic > Attempting direct delivery of bundle dtn://node1/-756841125212-0 to node2
2023-12-25T17:38:45.213Z DEBUG dtn7::core::processing > Attempting forwarding of dtn://node1/-756841125212-0 to nodes: [ClaSenderTask { tx: Sender { chan: Tx { inner: Chan { tx: Tx { block_tail: 0x55a0e92f80, tail_position: 0 }, semaphore: Semaphore { semaphore: Semaphore { permits: 100 }, bound: 100 }, rx_waker: AtomicWaker, tx_count: 2, rx_fields: "..." } } }, dest: "node2.b32.i2p:3000", cla_name: "http", next_hop: Dtn(1, DtnAddress("//node2/")) }]
2023-12-25T17:38:45.213Z DEBUG dtn7::core::processing > Attempting forwarding of dtn://node1/-756841125212-0 to nodes: [ClaSenderTask { tx: Sender { chan: Tx { inner: Chan { tx: Tx { block_tail: 0x55a0e92f80, tail_position: 0 }, semaphore: Semaphore { semaphore: Semaphore { permits: 100 }, bound: 100 }, rx_waker: AtomicWaker, tx_count: 2, rx_fields: "..." } } }, dest: "node2.b32.i2p:3000", cla_name: "http", next_hop: Dtn(1, DtnAddress("//node2/")) }]
2023-12-25T17:38:45.213Z DEBUG dtn7::core::store::mem > get_bundle dtn://node1/-756841125212-0
2023-12-25T17:38:45.213Z DEBUG dtn7::core::processing > Bundle contains an hop count block: dtn://node1/-756841125212-0 32 1
2023-12-25T17:38:45.213Z DEBUG dtn7::core::processing > Sending bundle to a CLA: dtn://node1/-756841125212-0 node2.b32.i2p:3000 http
2023-12-25T17:38:45.213Z DEBUG dtn7::cla::http > HttpConvergenceLayer: received transfer command for node2.b32.i2p:3000
thread 'tokio-runtime-worker' panicked at 'called `Result::unwrap()` on an `Err` value: AddrParseError(Socket)', /root/.cargo/registry/src/github.com-1ecc6299db9ec823/dtn7-0.19.0/src/cla/http.rs:28:51
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
2023-12-25T17:38:45.218Z INFO dtn7::core::processing > Sending bundle dtn://node1/-756841125212-0 via http to node2.b32.i2p:3000 (dtn://node2/) failed after 4.924685ms
2023-12-25T17:38:45.218Z DEBUG dtn7::core::processing > Error while transferring bundle dtn://node1/-756841125212-0: channel closed
2023-12-25T17:38:45.218Z DEBUG dtn7::core::processing > Reporting failed sending to peer: node2
2023-12-25T17:38:45.218Z INFO dtn7::core::processing > Failed to forward bundle to any CLA: dtn://node1/-756841125212-0

It looks like the HTTP request is attempted directly, not over the proxy configured via the environment variables.
Is there any configuration option in dtn7-rs to fix this?
If there is no config parameter, then consider this a feature request :)
Transmitting messages over overlay networks such as i2p/tor requires HTTP/SOCKS proxy support.
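For context, the `AddrParseError(Socket)` in the panic is what Rust's standard `SocketAddr` parser returns for any hostname:port string: only literal IP addresses parse, so an i2p/onion peer address can never get through this code path. A minimal demonstration (what exactly `http.rs:28` calls is an assumption based on the error type):

```rust
use std::net::SocketAddr;

fn main() {
    // A literal IP:port parses fine:
    assert!("127.0.0.1:3000".parse::<SocketAddr>().is_ok());

    // A hostname:port does not: SocketAddr's FromStr accepts only literal
    // IP addresses, which is the AddrParseError(Socket) behind the panic.
    assert!("node2.b32.i2p:3000".parse::<SocketAddr>().is_err());
}
```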

@gh0st42
Member

gh0st42 commented Dec 28, 2023

So far, we have not tested any of the CLAs with proxies, but using tor as a transport has been discussed a few times.
As our underlying HTTP client library does not respect the standard environment variables, we currently have no way to configure proxies.

But did you try any LD_PRELOAD hacks or tools such as torify, socksify, and similar proxy wrappers?

Feedback on success and on the performance of the different CLAs over an overlay network is very welcome! Either here in the issue tracker, in Discussions on GitHub, or in our Matrix channel.

@gh0st42
Member

gh0st42 commented Dec 29, 2023

Also, I just checked the various CLAs.
While httppull is not the fastest one, it is based on the reqwest library, which claims to respect system proxy settings. Did you try that one as well, or just the regular http CLA?
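As a rough sketch of what "respecting system proxy settings" means in practice: a client checks the scheme-specific environment variables exported in the original report. The helper below is illustrative, not reqwest's actual API:

```rust
use std::env;

// Illustrative helper (not reqwest's API): look up the proxy URL for a
// URL scheme via the conventional environment variables, trying the
// lowercase form first and the uppercase form as a fallback.
fn proxy_for_scheme(scheme: &str) -> Option<String> {
    let lower = format!("{}_proxy", scheme);
    env::var(&lower)
        .or_else(|_| env::var(lower.to_uppercase()))
        .ok()
}

fn main() {
    // With `export http_proxy=http://127.0.0.1:4444` (as in the report),
    // proxy_for_scheme("http") would return that URL; unset schemes yield None.
    println!("http proxy: {:?}", proxy_for_scheme("http"));
}
```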

I would also be very interested in how your system distributes the participating nodes via i2p to all the dtn instances.
There is an upcoming feature, to be merged into the next release, which allows dynamically adding static and dynamic peers via the REST interface. This should come in handy for such a setup.

@iwojim0
Contributor Author

iwojim0 commented Jan 3, 2024

Yep, thanks.
Sure, httppull uses reqwest, which has native support for the HTTP_PROXY environment variable.
However, httppull.rs uses a PeerAddress::Ip(ipaddr) call, which does not work properly for i2p/onion peers, since such hostnames cannot be resolved to a correct IP address.
I implemented some changes to httppull.rs (attached).
These changes work for my configuration, and messages now transmit between nodes in different networks:
node1 is configured for i2p only (dtn7-node1.b32.i2p); its upstream HTTP proxy (privoxy) is configured for i2p only
node2 is configured for i2p and onion (dtn7-node2.b32.i2p, dtn7-node2.onion); its upstream HTTP proxy is configured for i2p and onion
node3 is configured for onion only (dtn7-node3.onion); its upstream HTTP proxy is configured for onion only
Peers for node1:
[statics]
peers = [
"http://node2.b32.i2p:3000/node2",
"http://node3.onion:3000/node3"
]

Peers for node2:
[statics]
peers = [
"http://node1.b32.i2p:3000/node1",
"http://node3.onion:3000/node3"
]

Peers for node3:
[statics]
peers = [
"http://node1.b32.i2p:3000/node1",
"http://node2.onion:3000/node2"
]
I'm not a Rust programmer, and this patch may not be fully correct from a general DTN architecture perspective or for some other use cases (for example, I have not checked that it works for clearnet addresses).
Please check everything before including the patch in a release.

> dynamically adding static and dynamic peers via the rest interface.

That's cool! :)

working_httppull_cla_with_support_i2p_onion_for_dtn7-rs_v.0.19.0.patch.gz
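The core idea described above (treating i2p/onion peers as opaque hostnames instead of parseable IP addresses, and leaving name resolution to the proxy) can be sketched roughly as follows; the `PeerAddress` variants and `peer_url` helper here are illustrative names, not the actual dtn7-rs code:

```rust
use std::net::IpAddr;

// Hypothetical sketch of the idea behind the attached patch; these are
// illustrative types, not the actual dtn7-rs definitions.
enum PeerAddress {
    Ip(IpAddr),      // clearnet peer, resolvable locally
    Generic(String), // opaque hostname such as node2.b32.i2p or node3.onion
}

// Build the CLA target URL without resolving the hostname ourselves;
// resolution is left to the HTTP proxy, which knows the overlay network.
fn peer_url(addr: &PeerAddress, port: u16, service: &str) -> String {
    match addr {
        PeerAddress::Ip(ip) => format!("http://{}:{}/{}", ip, port, service),
        PeerAddress::Generic(host) => format!("http://{}:{}/{}", host, port, service),
    }
}

fn main() {
    let peer = PeerAddress::Generic("node2.b32.i2p".to_string());
    assert_eq!(peer_url(&peer, 3000, "node2"), "http://node2.b32.i2p:3000/node2");
}
```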

@gh0st42
Member

gh0st42 commented Jan 4, 2024

Glad that this works now!
Would you mind making a PR for your patch?
I'll have to play around with this a bit, but I think we can merge something that provides this functionality.

But we'll still keep proxy support in mind also for the other convergence layers.

@gh0st42
Member

gh0st42 commented Feb 27, 2024

Sorry, I accidentally closed the PR; can you please reopen it?
I have some other stuff to merge first and will then add your changes after some testing.

@iwojim0
Contributor Author

iwojim0 commented Mar 9, 2024

Sorry for the late response. I updated my fork to release v0.20.1 and adjusted the patch accordingly; the patch is now fully functional for v0.20.1.
PR: #63

gh0st42 pushed a commit that referenced this issue Mar 14, 2024
@gh0st42
Member

gh0st42 commented Mar 14, 2024

I merged your PR and just released v0.20.2 with your changes.
There are also Docker images available for the most recent builds:
docker.io/gh0st42/dtn7:bookworm
docker.io/gh0st42/dtn7:alpine

@gh0st42 gh0st42 closed this as completed Mar 14, 2024