support work over proxy HTTP/HTTPS/SOCKS #52
So far, we have not tested any of the CLAs with proxies, but using Tor as a transport has been discussed a few times. Did you try any LD_PRELOAD hacks or tools such as torify, socksify, or similar proxy wrappers? Feedback on success, and on the performance of the different CLAs over an overlay network, is very welcome: either here in the issue tracker, in Discussions on GitHub, or in our Matrix channel.
Also, I just checked the various CLAs. I would also be very interested in your setup for distributing participating nodes to all the dtn instances via I2P.
Yep, thanks.
It's cool! :) working_httppull_cla_with_support_i2p_onion_for_dtn7-rs_v.0.19.0.patch.gz
Glad that this works now! But we'll still keep proxy support in mind for the other convergence layers as well.
Sorry, I accidentally closed the PR, can you please reopen it?
Sorry for the late response. I updated my fork to release v0.20.1 and corrected the patch for it; the patch is now fully functional for v0.20.1.
I merged your PR and just released v0.20.2 with your changes.
I tested dtn7-rs in an I2P overlay network and see these messages on stdout:
root@092f12c73cf8:/# export http_proxy=http://127.0.0.1:4444;export https_proxy=http://127.0.0.1:4444;dtnd -n dtn://node1 -r epidemic -C http -e incoming -w 3000 --disable_nd -s http://node2.b32.i2p:3000/node2 -d
2023-12-25T17:38:32.096Z INFO dtnd > starting dtnd
2023-12-25T17:38:32.096Z INFO dtn7::dtnd::daemon > Local Node ID: dtn://node1/
2023-12-25T17:38:32.096Z INFO dtn7::dtnd::daemon > Work Dir: "/"
2023-12-25T17:38:32.096Z INFO dtn7::dtnd::daemon > DB Backend: mem
2023-12-25T17:38:32.096Z INFO dtn7::dtnd::daemon > Announcement Interval: 2s
2023-12-25T17:38:32.096Z INFO dtn7::dtnd::daemon > Janitor Interval: 10s
2023-12-25T17:38:32.096Z INFO dtn7::dtnd::daemon > Peer Timeout: 20s
2023-12-25T17:38:32.096Z INFO dtn7::dtnd::daemon > Web Port: 3000
2023-12-25T17:38:32.096Z INFO dtn7::dtnd::daemon > IPv4: true
2023-12-25T17:38:32.096Z INFO dtn7::dtnd::daemon > IPv6: false
2023-12-25T17:38:32.096Z INFO dtn7::dtnd::daemon > Generate Status Reports: false
2023-12-25T17:38:32.097Z INFO dtn7::dtnd::daemon > RoutingAgent: epidemic
2023-12-25T17:38:32.097Z INFO dtn7::dtnd::daemon > RoutingOptions: {}
2023-12-25T17:38:32.097Z INFO dtn7::dtnd::daemon > Adding CLA: HttpConvergenceLayer
2023-12-25T17:38:32.097Z INFO dtn7::dtnd::daemon > Adding static peer: http://node2.b32.i2p/node2
2023-12-25T17:38:32.097Z INFO dtn7::core > Registered new application agent for EID: dtn://node1/
2023-12-25T17:38:32.097Z INFO dtn7::core > Registered new application agent for EID: dtn://node1/incoming
2023-12-25T17:38:32.097Z INFO dtn7::dtnd::daemon > Starting convergency layers
2023-12-25T17:38:32.097Z INFO dtn7::dtnd::daemon > Setup http:3000
2023-12-25T17:38:32.098Z DEBUG dtn7::dtnd::janitor > running janitor
2023-12-25T17:38:32.098Z DEBUG dtn7::dtnd::janitor > cleaning up peers
2023-12-25T17:38:32.098Z DEBUG dtn7::dtnd::janitor > reprocessing bundles
2023-12-25T17:38:32.098Z DEBUG dtn7::core > time to process 0 bundles: 9.916µs
2023-12-25T17:38:42.099Z DEBUG dtn7::dtnd::janitor > running janitor
2023-12-25T17:38:42.099Z DEBUG dtn7::dtnd::janitor > cleaning up peers
2023-12-25T17:38:42.099Z DEBUG dtn7::dtnd::janitor > reprocessing bundles
2023-12-25T17:38:42.099Z DEBUG dtn7::core > time to process 0 bundles: 16.624µs
2023-12-25T17:38:45.212Z DEBUG dtn7::dtnd::httpd > Received for sending: 11
2023-12-25T17:38:45.212Z DEBUG dtn7::dtnd::httpd > Sending bundle dtn://node1/-756841125212-0 to dtn://node2/incoming
2023-12-25T17:38:45.212Z DEBUG dtn7::core::store::mem > inserting bundle dtn://node1/-756841125212-0 in to store
2023-12-25T17:38:45.212Z INFO dtn7::core::processing > Transmission of bundle requested: dtn://node1/-756841125212-0
2023-12-25T17:38:45.213Z INFO dtn7::core::processing > Dispatching bundle: dtn://node1/-756841125212-0
2023-12-25T17:38:45.213Z DEBUG dtn7::core::store::mem > get_bundle dtn://node1/-756841125212-0
2023-12-25T17:38:45.213Z DEBUG dtn7::routing::epidemic > Attempting direct delivery of bundle dtn://node1/-756841125212-0 to node2
2023-12-25T17:38:45.213Z DEBUG dtn7::core::processing > Attempting forwarding of dtn://node1/-756841125212-0 to nodes: [ClaSenderTask { tx: Sender { chan: Tx { inner: Chan { tx: Tx { block_tail: 0x55a0e92f80, tail_position: 0 }, semaphore: Semaphore { semaphore: Semaphore { permits: 100 }, bound: 100 }, rx_waker: AtomicWaker, tx_count: 2, rx_fields: "..." } } }, dest: "node2.b32.i2p:3000", cla_name: "http", next_hop: Dtn(1, DtnAddress("//node2/")) }]
2023-12-25T17:38:45.213Z DEBUG dtn7::core::processing > Attempting forwarding of dtn://node1/-756841125212-0 to nodes: [ClaSenderTask { tx: Sender { chan: Tx { inner: Chan { tx: Tx { block_tail: 0x55a0e92f80, tail_position: 0 }, semaphore: Semaphore { semaphore: Semaphore { permits: 100 }, bound: 100 }, rx_waker: AtomicWaker, tx_count: 2, rx_fields: "..." } } }, dest: "node2.b32.i2p:3000", cla_name: "http", next_hop: Dtn(1, DtnAddress("//node2/")) }]
2023-12-25T17:38:45.213Z DEBUG dtn7::core::store::mem > get_bundle dtn://node1/-756841125212-0
2023-12-25T17:38:45.213Z DEBUG dtn7::core::processing > Bundle contains an hop count block: dtn://node1/-756841125212-0 32 1
2023-12-25T17:38:45.213Z DEBUG dtn7::core::processing > Sending bundle to a CLA: dtn://node1/-756841125212-0 node2.b32.i2p:3000 http
2023-12-25T17:38:45.213Z DEBUG dtn7::cla::http > HttpConvergenceLayer: received transfer command for node2.b32.i2p:3000
thread 'tokio-runtime-worker' panicked at 'called `Result::unwrap()` on an `Err` value: AddrParseError(Socket)', /root/.cargo/registry/src/github.com-1ecc6299db9ec823/dtn7-0.19.0/src/cla/http.rs:28:51
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
2023-12-25T17:38:45.218Z INFO dtn7::core::processing > Sending bundle dtn://node1/-756841125212-0 via http to node2.b32.i2p:3000 (dtn://node2/) failed after 4.924685ms
2023-12-25T17:38:45.218Z DEBUG dtn7::core::processing > Error while transferring bundle dtn://node1/-756841125212-0: channel closed
2023-12-25T17:38:45.218Z DEBUG dtn7::core::processing > Reporting failed sending to peer: node2
2023-12-25T17:38:45.218Z INFO dtn7::core::processing > Failed to forward bundle to any CLA: dtn://node1/-756841125212-0
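The panic at `src/cla/http.rs:28` is consistent with the peer string being parsed as a `std::net::SocketAddr`, which only accepts literal IP:port pairs; a hostname such as a `.b32.i2p` address cannot resolve through that path and fails with `AddrParseError`. A minimal sketch of the failure mode (illustrative only, not the actual dtn7 code):

```rust
use std::net::SocketAddr;

fn main() {
    // A literal IP:port parses as a SocketAddr...
    assert!("127.0.0.1:3000".parse::<SocketAddr>().is_ok());

    // ...but a hostname:port does not. Calling unwrap() on this result
    // panics with the AddrParseError(Socket) seen in the log above.
    let res = "node2.b32.i2p:3000".parse::<SocketAddr>();
    assert!(res.is_err());
    println!("hostname:port cannot be parsed as a SocketAddr: {res:?}");
}
```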
It looks like the HTTP request goes out directly rather than through the proxy configured via the environment variables.
Is there any configuration option in dtn7-rs to fix this?
If there are no config parameters, that's fine; consider this a feature request. :)
Transmitting messages over overlay networks such as I2P or Tor requires HTTP/SOCKS proxy support.
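For reference, routing plain-HTTP CLA traffic through an HTTP proxy mainly means connecting to the proxy and sending the request with an absolute-form target (per RFC 7230, Section 5.3.2) instead of opening a direct connection to the peer. A rough sketch of the request head; the `/push` endpoint path and the `proxied_request_head` helper are hypothetical, not dtn7-rs API:

```rust
// Sketch: an HTTP proxy expects the absolute-form request target,
// e.g. "POST http://host:port/path HTTP/1.1", on the request line.
// The "/push" path below is a placeholder, not a confirmed dtn7 endpoint.
fn proxied_request_head(host: &str, path: &str) -> String {
    format!(
        "POST http://{host}{path} HTTP/1.1\r\n\
         Host: {host}\r\n\
         Content-Type: application/octet-stream\r\n\r\n"
    )
}

fn main() {
    let head = proxied_request_head("node2.b32.i2p:3000", "/push");
    assert!(head.starts_with("POST http://node2.b32.i2p:3000/push HTTP/1.1"));
    println!("{head}");
}
```

With this request shape, name resolution for the `.b32.i2p` host happens inside the proxy, so the daemon never needs to parse the hostname itself.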