Documentation #986
Hi, and sorry for the late reply.
... and thank you so much for your interest. Unfortunately the project
is neither actively being developed at this time, nor was it very well
documented while it was being developed. We built it as an operational system,
and as long as development was active it did not make sense to spend a lot of
time on documentation; once our project funding was cut we spent a little time
documenting the high-level design, but not to the level of detail you
are looking for.
I may (or may not) take some time over the next few weeks during my
vacation to write up something answering the questions you ask.
It is at this time unknown whether we will restart development on this project,
but I know for a fact that many of the ideas that went into the design were
sound, and the code in this GitHub repository did run a fully operational
MVNO (oya.sg) for about eight months from June last year (2019), so it's known
to have been working :-)
I can give you a quick answer on how we consumed the Diameter traffic:

When we connected to mobile network operators (MNOs) we did this by
establishing an IPSEC VPN going from the MNO to Google Cloud, where we
terminated it using the standard IPSEC infrastructure provided by Google.
We had multiple physical interfaces, and used BGP to ensure that traffic
would get through even if an endpoint at either end failed. From that point
we just routed the traffic using ordinary routing rules in GCP/Kubernetes to
an interface that consumed the Diameter traffic. We used Diameter over TCP,
and relied on BGP instead of SCTP to ensure redundancy. In practice this
proved to be both a simple and robust solution.

The fact that we used GCP's standard IPSEC interface also made integration
with other MNOs a streamlined process. When the project was terminated we
had done this particular type of interfacing with five different MNOs, and
had a playbook that let us reliably go from first contact to fully
operational network integration in less than three months, with most of
that time spent in contract negotiations. The actual integration typically
took less than a week, debugging included.
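To make the termination part a bit more concrete, here is a minimal sketch (not code from this repository, with the handling logic invented for illustration and error handling omitted) of what a plain Diameter-over-TCP terminator can look like. It just frames messages using the 20-byte header from RFC 6733; a real terminator would go on to parse the AVPs and answer CER/CCR properly:

```kotlin
// Minimal illustration of framing Diameter-over-TCP messages on a plain socket.
import java.io.DataInputStream
import java.net.ServerSocket

fun main() {
    ServerSocket(3868).use { server ->                       // 3868 is the standard Diameter port
        while (true) {
            val socket = server.accept()
            Thread {
                DataInputStream(socket.getInputStream()).use { input ->
                    while (true) {
                        val header = ByteArray(20)
                        input.readFully(header)              // RFC 6733 header is 20 bytes
                        val version = header[0].toInt()
                        // The 24-bit message length field includes the header itself.
                        val length = ((header[1].toInt() and 0xFF) shl 16) or
                                ((header[2].toInt() and 0xFF) shl 8) or
                                (header[3].toInt() and 0xFF)
                        val commandCode = ((header[5].toInt() and 0xFF) shl 16) or
                                ((header[6].toInt() and 0xFF) shl 8) or
                                (header[7].toInt() and 0xFF)
                        val avps = ByteArray(length - 20)
                        input.readFully(avps)                // the AVPs for this message
                        println("Diameter v$version command $commandCode, $length bytes")
                        // A real terminator would parse the AVPs, answer CER with CEA,
                        // and forward CCRs to the charging logic.
                    }
                }
            }.start()
        }
    }
}
```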
Best wishes
Bjørn
…On Thu, Jun 11, 2020 at 5:09 PM ashkank83 ***@***.***> wrote:
Hi All,
Thanks for this very interesting project.
I looked everywhere in the documents to get an understanding of the big
picture but unfortunately couldn't find any. In particular I think it will
be very useful to have documentation briefly explaining:
A. The different components/services and what each one is responsible for
B. A K8S deployment diagram showing the components and the flow of requests
between them, both inside the K8S cluster and from outside. For example, it
would be nice to know how Diameter traffic comes into the cluster (is it via
a load balancer, a node port, ...?) and how it is handled.
Please point me in the right direction if I have missed it. Thanks
On Mon, Jul 6, 2020 at 10:45 AM ashkank83 ***@***.***> wrote:
Hi Bjørn,
Thanks for the reply. I'm sorry to hear about the project not being
active; the ideas are certainly very interesting, and the timelines you
mentioned in your reply regarding having MVNOs up and running are rather
impressive.
Thank you. We were rather impressed ourselves the last few times we did it
;)
I'm in the process of putting together a POC for the design of a Diameter
endpoint which should be hosted inside our K8S cluster (to benefit from
orchestration, ...) and is responsible for:
A. Doing capability exchange with network elements
B. Maintaining connections with other peers (e.g. P-GW)
C. Converting incoming Diameter traffic into gRPC calls and sending them to
the OCS or other services inside our K8S cluster
Excellent choice. We also tried converting to gRPC and it worked well;
more details below. The component we had called "Prime" provided, among
other things, the business logic for the OCS functionality.
The database used for the realtime lookups was neo4j, essentially an
in-memory database with persistent backup :-). Very fast, very flexible,
and let us play with many and varied customer/product/pricing/payer
relationships. We ran our own instance, but Neo4J.com provides a very nice
hosted service in GCP, and we were planning to transition to that as soon
as we could (until it no longer mattered).
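For a flavour of what such a real-time lookup can look like, here is a hypothetical sketch using the official Neo4j Java driver from Kotlin; the graph model (Subscriber and Bundle nodes with a balance property) is invented for illustration and is not necessarily the schema Prime actually used:

```kotlin
// Hypothetical real-time balance lookup with the Neo4j Java driver.
// The graph model (Subscriber, HAS_BUNDLE, Bundle.balance) is invented here.
import org.neo4j.driver.AuthTokens
import org.neo4j.driver.GraphDatabase
import org.neo4j.driver.Values

fun remainingBalanceBytes(msisdn: String): Long =
    GraphDatabase.driver("bolt://localhost:7687", AuthTokens.basic("neo4j", "secret")).use { driver ->
        driver.session().use { session ->
            // Sum the remaining balance over all bundles owned by the subscriber.
            session.run(
                "MATCH (s:Subscriber {msisdn: \$msisdn})-[:HAS_BUNDLE]->(b:Bundle) " +
                "RETURN sum(b.balance) AS balance",
                Values.parameters("msisdn", msisdn)
            ).single().get("balance").asLong()
        }
    }
```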
The above seems to be very similar to the ideas that I saw in your project.
The challenge for me right now is to prove (and find out) that Diameter traffic
can be handled in K8S via a load balancer (or any other service).
I agree, from your description it sounds pretty much identical.
So what we did was to accept the Diameter traffic (only Gy, but the
principle can be used for other protocols too) and immediately terminate
it. We ran the Diameter termination component outside of Kubernetes, because
IP termination was easier there, and then sent the traffic onwards over a
different protocol.

We played with a few different protocols for that. The first one was the
LMAX Disruptor, back when we ran everything in the same monolith; it was
scary fast and worked well. In the end we dropped LMAX and had a separate
Diameter-terminating module that translated to a different protocol. I
honestly don't remember which one was the latest: one was gRPC, and the other
was a queueing service from Google (Cloud Pub/Sub?). We tried both, and they
both worked and had their strong and weak sides.

We did some calculations/tests and found that it would be a long time until
we hit any bottlenecks for Diameter, so we concentrated our efforts on other
issues. Architecturally, terminating Diameter at the edge, or in our case
just outside the edge of the K8s cluster, was a good choice since it isolated
all of those problems far away from where we did our everyday
development work.
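As a sketch of the queueing variant, assuming Cloud Pub/Sub and with the project ID, topic name and payload format made up for illustration, the Diameter-terminating module could hand each parsed Gy request to the cluster like this:

```kotlin
// Hypothetical Pub/Sub hand-off: the Diameter terminator publishes each parsed
// request to a topic that the in-cluster charging logic subscribes to.
// Project ID, topic name and the serialized payload format are all assumptions.
import com.google.cloud.pubsub.v1.Publisher
import com.google.protobuf.ByteString
import com.google.pubsub.v1.PubsubMessage
import com.google.pubsub.v1.TopicName

fun publishGyRequest(serializedRequest: ByteArray) {
    val topic = TopicName.of("my-gcp-project", "ocs-gy-requests")
    val publisher = Publisher.newBuilder(topic).build()
    try {
        val message = PubsubMessage.newBuilder()
            .setData(ByteString.copyFrom(serializedRequest))
            .build()
        // publish() is asynchronous; get() just blocks for the message id in this sketch.
        val messageId = publisher.publish(message).get()
        println("Published Gy request as Pub/Sub message $messageId")
    } finally {
        publisher.shutdown()
    }
}
```

In a real terminator the Publisher would of course be built once and reused across requests rather than created per call.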
I wonder what happens when K8S decides to kill such a Diameter pod, or when
we want to increase (or decrease) the number of pods. AFAIK Diameter peers
need a direct (p2p) connection, and I'm not sure what will happen if this
connection is to a K8s Service IP address rather than to the pod itself.
In our case, nothing would have happened, since it didn't run in a pod.
The components that ran the business logic for the Gy interface did run in
Kubernetes, but routing to them was dynamic, either via Cloud Endpoints and
gRPC or via the queueing interface, so it didn't matter which pod they ran
on. Zero-downtime upgrades worked well.
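One way to get that pod-agnostic behaviour with gRPC (not necessarily how it was wired in this project) is to point a channel at a headless service name and let the client balance across whatever pod endpoints are currently behind it; the service name and port below are hypothetical:

```kotlin
// Hypothetical sketch: a gRPC channel that load-balances across the pods
// behind a headless Kubernetes service.
import io.grpc.ManagedChannel
import io.grpc.ManagedChannelBuilder

fun buildOcsChannel(): ManagedChannel =
    ManagedChannelBuilder
        .forTarget("dns:///prime.default.svc.cluster.local:8080") // headless service -> pod IPs
        .defaultLoadBalancingPolicy("round_robin")                // spread calls across pods
        .usePlaintext()
        .build()
```

Any generated stub created on such a channel then spreads its calls over the live endpoints, which is one way to keep pod restarts and scaling invisible to the caller.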
Anyway, once again thanks a lot for the reply and the very interesting project
you have shared here.
Thank you. I'd love to hear about your project if and when you get any
traction on it. My private email is [email protected].
You may be interested to know that we also had a fully functional delivery
chain for eSIM integrated in our system (from batch ordering up to and
including installation of profiles via an app) and that the source code
for all of that is in the same repository/GitHub organization. Perhaps eSIM is
not your primary concern right now, but if anyone ever whispers "eSIM"
around you, you now have one more place to look for ideas and inspiration.
Best wishes with your PoC; I know that what you're trying to do is feasible
and a fundamentally good idea. Hope you can keep your funding :-)
Bjørn