
Static pre-shared key mode (for client only so far), add a mirage-router unikernel #37

Merged
merged 31 commits into master from static-keys on Apr 5, 2020

Conversation

hannesm
Contributor

@hannesm hannesm commented Mar 28, 2020

this fixes #11

At the moment only the client is implemented, and only UDP mode so far. There's also some code duplication between handle_client and handle_static_client (with respect to the resolving and connecting state machine) which should be cleaned up before merge. Compression is currently hardcoded to true (it should be looked up in the config). It may be worth supporting OCC (as mentioned in #11), though OpenVPN seems to deal fine without it (likely it makes a difference if the configuration mismatches, or for MTU adjustments).

I tested with an OpenVPN server running on my laptop and a mirage-client unikernel running as a virtual machine. From my laptop I can successfully ping the unikernel on its OpenVPN-tunneled IP address, and the unikernel answers with ICMP echo replies.

@hannesm
Contributor Author

hannesm commented Mar 31, 2020

this is getting better:

  • more code is shared for the resolve + connect state machine logic between the TLS and static-key clients (there may be more to clean up in the future, but e.g. the timeout logic is slightly subtle with respect to channels and protocols)
  • the compression setting is now taken from the configuration instead of being hardcoded to true
  • TCP support is still missing (though session.protocol is already set to the right protocol) -- it should be simple enough to add and consume the 2-byte length prefix when TCP is used (and potentially fill linger); see the sketch below
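
For illustration only, here is a rough sketch of that 2-byte big-endian length prefix using Cstruct; the variant names and the linger handling are assumptions, not the code in this PR:

let encode_tcp data =
  (* prepend the payload length as a big-endian uint16 *)
  let hdr = Cstruct.create 2 in
  Cstruct.BE.set_uint16 hdr 0 (Cstruct.len data);
  Cstruct.append hdr data

let decode_tcp buf =
  (* either a complete packet plus the remaining bytes, or the unconsumed
     buffer to be kept around (as linger) until more data arrives *)
  if Cstruct.len buf < 2 then `Linger buf
  else
    let plen = Cstruct.BE.get_uint16 buf 0 in
    if Cstruct.len buf - 2 < plen then `Linger buf
    else
      let pkt, rest = Cstruct.split (Cstruct.shift buf 2) plen in
      `Packet (pkt, rest)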

I used the previous version (morally cea472f) for ~48 hours without an issue (on my laptop, connected via wlan and a flaky DSL uplink -- it worked fine throughout disconnects and new IP addresses), and have now been running 1de4cb2 for ~1 hour without issues. I plan to deploy 1de4cb2 on my server.

so, feedback (with the above described limitations in mind) on this PR would be great @cfcs :) while working on this, I discovered that our incoming_data does not check for monotonicity / linearity of the packet ids -- should it? At the moment it seems we're open to replay attacks on the data packets (does that matter?) -- if the answer is yes, the next question would be: should we allow a window, or only accept strictly increasing numbers there? I.e. if data packets arrive out of order (1, 3, 2), should 3 be rejected? Should 2 be rejected?

Contributor

@cfcs cfcs left a comment


Looks good to me, left feedback.

Re: replays: I think we should definitely support out-of-order receiving for UDP. For TCP I'm not so sure, but I could imagine a multi-core implementation might send out of order on TCP.
As for how to do replay detection, maybe a sliding window where we keep track of the last n messages could work?

@cfcs
Contributor

cfcs commented Apr 2, 2020

Here's an (untested) attempt at a space-efficient sliding-window algorithm to recognize repeated packet ids. The window size is Sys.int_size - 1 (so 62 on a 64-bit platform). No idea if that's sufficient.

module Sliding_window : sig
  type t
  val make : int -> t
  val add : int -> t -> (t, string) result
  val mem : t -> int -> bool
  val pp : Format.formatter -> t -> unit
end = struct
  (* [counter] is the highest id seen so far; bit [diff] of [window] is
     set if id [counter - diff] has been seen *)
  type t = { window: int; counter: int; }
  let make counter = { window = 1; counter }
  let add n {window; counter} =
    if n <= counter then begin
      let diff = counter - n in
      if diff >= Sys.int_size
      then Error "n not in sliding window"
      else Ok {window = window lor (1 lsl diff); counter}
    end else begin (* n > counter; always succeeds *)
      let diff = n - counter in
      if diff >= Sys.int_size
      then Ok (make n) (* new window *)
      else Ok {window = (window lsl diff) lor 1; counter = n}
    end

  let pp ppf {window;counter} =
    Fmt.pf ppf "{ window = %d; counter = %d }" window counter

  let mem {window;counter} n =
    let diff = counter - n in
    (* diff >= Sys.int_size would make the shift below unspecified *)
    if counter < n || diff >= Sys.int_size
    then false
    else 0 <> window land (1 lsl diff)
end
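
A hypothetical usage sketch on top of this module (check_and_record is a made-up name, and where the window would live in the state is left open):

let check_and_record t packet_id =
  (* reject ids that were already seen; otherwise record the id and
     return the updated window *)
  if Sliding_window.mem t packet_id then Error "replayed packet-id"
  else Sliding_window.add packet_id t

(* For the (1, 3, 2) question above: starting from Sliding_window.make 1,
   id 3 is accepted (the window slides forward), id 2 is accepted as well
   (it lies inside the window and was not yet seen), and a repeated 3 is
   rejected; ids older than the window are rejected by add *)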

hannesm and others added 6 commits April 4, 2020 11:14
@hannesm
Contributor Author

hannesm commented Apr 4, 2020

about the replay protection:

  • for message-id / control packets, we do retransmissions (and keep in State.transport an IM.t, a map keyed by Int32) -- since control packets are ACKed and need to be retransmitted (see expected_packet in Engine), we currently only accept the very next ID (i.e. no out-of-order, no holes) -- this is a deficiency which we should overcome (esp. important in UDP mode)
  • for packet-id / data packets, there's no ACK / retransmission. This means we don't need to store the packets sent out, and can use the sliding window implementation you suggested. I'd expect this only affects UDP as well (the C OpenVPN implementation is single-threaded, and I'd expect even a multi-threaded implementation to ensure sending packets in order -- frivpn is multi-threaded, but only uses a single thread for sending and ensures in-order delivery)

The underlying question is still: how large should the window be? 3? 5? And given the above <packet-id> <timestamp> vs <packet-id>, what should the delta for the timestamp be? (This is actually related to the control packets, which include a timestamp as well, and there's a TODO in the code about what to do with the timestamp.) A sketch of a combined check follows below.
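
For discussion, a minimal sketch of how such a combined check could look, reusing the Sliding_window module from above and assuming the maximal timestamp delta comes from configuration; validate, now and max_time_delta are hypothetical names, and whether the timestamp should be checked at all is exactly the open TODO:

let validate ~now ~max_time_delta window ~packet_id ~timestamp =
  (* reject packets whose timestamp is too far from our clock *)
  if abs (now - timestamp) > max_time_delta then
    Error "timestamp outside accepted delta"
  (* reject replayed or too-old packet ids via the sliding window *)
  else if Sliding_window.mem window packet_id then
    Error "replayed packet-id"
  else
    Sliding_window.add packet_id window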

@hannesm
Contributor Author

hannesm commented Apr 4, 2020

what remains is implementing the replay protection -- and further testing

there's a bit of asymmetry: the I.write is done in a separate Lwt task (to allow that task to wait for ARP timeouts, which is crucial to retain), but the O.write is done in the same task (since it returns immediately (famous last words)) -- but then in ovpn_receive the match is not in Lwt.t, and thus I wrapped it in a Lwt.async :/
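
A minimal illustration of that wrapping (classify and send_to_network are stand-ins, not the functions in this PR):

(* hypothetical stand-ins for the routing decision and the network write *)
let classify payload = if Cstruct.len payload > 0 then `Forward payload else `Drop
let send_to_network _pkt = Lwt.return_unit

let handle_payload payload =
  (* the decision is a plain (non-Lwt) match, so the effectful write
     towards the network stack is detached with Lwt.async *)
  match classify payload with
  | `Forward pkt -> Lwt.async (fun () -> send_to_network pkt)
  | `Drop -> ()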

anyway, this is good to be squashed and merged IMHO; the replay protection is important, but can be a separate PR (I feel this one is already large enough)

@cfcs
Contributor

cfcs commented Apr 4, 2020

Minor suggestions, mostly about documentation and API clarity, otherwise looks good to go for me.

@hannesm hannesm changed the title Static keys Static pre-shared key mode (for client only so far), add a mirage-router unikernel Apr 5, 2020
Contributor

@cfcs cfcs left a comment


LGTM!

@cfcs cfcs merged commit 7ca36d2 into master Apr 5, 2020
@cfcs cfcs deleted the static-keys branch April 5, 2020 15:17