Allocating a random port or provisioning an external load balancer is easy to set in motion, but each approach comes with its own challenges. Defining many NodePort services creates a tangle of random ports, while defining a LoadBalancer service for every application leads many teams to pay for more cloud resources than they intended.

This cannot be avoided entirely, but perhaps it can be reduced and contained, so that you only need to allocate a single random port or load balancer to expose many internal services. What the platform needs is a new layer of abstraction, one that consolidates many services behind a single entry point.

That's where the Kubernetes API introduces a new kind of manifest, called Ingress, which offers a fresh take on the routing problem. It works like this: you write an Ingress manifest stating how you want client traffic to be forwarded to your services.
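A minimal sketch of such a manifest might look like the following (the hostname `example.com` and the Service name `my-service` are assumptions for illustration, not from the original text):

```yaml
# Hypothetical Ingress routing all HTTP traffic for one host to one Service.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
  - host: example.com          # assumed hostname
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-service   # assumed internal Service
            port:
              number: 80
```

Applying this with `kubectl apply -f` tells the cluster the routing intent; an Ingress controller is still required to actually carry it out.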

Image Source: Google

Ingress controllers run as pods, like any other application, so they are part of the cluster and can see the other pods. They are built on reverse proxies that have been battle-tested in the market for many years. So you have your choice of an HAProxy Ingress Controller, an NGINX Ingress Controller, and so forth.

The underlying proxy provides Layer 7 routing and load balancing capabilities, and each proxy brings its own set of features to the table.
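Layer 7 routing means the controller can inspect the HTTP request and fan traffic out to different backends. As a sketch, a single Ingress might route by URL path (the Service names `web-frontend` and `api-backend` here are hypothetical):

```yaml
# Hypothetical path-based fan-out: one entry point, two internal Services.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: fanout-ingress
spec:
  rules:
  - http:
      paths:
      - path: /              # everything else goes to the web frontend
        pathType: Prefix
        backend:
          service:
            name: web-frontend   # assumed Service name
            port:
              number: 80
      - path: /api           # API calls go to a separate backend
        pathType: Prefix
        backend:
          service:
            name: api-backend    # assumed Service name
            port:
              number: 8080
```

This kind of fan-out is exactly what a Layer 4 NodePort or LoadBalancer Service cannot do on its own, since those operate below the HTTP layer.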

Running inside the cluster, Ingress controller pods are subject to the same walled-in networking as any other Kubernetes pods. You need to expose them to the outside world through a Service of type NodePort or LoadBalancer. Now, however, you have a single entry point through which all traffic passes: one Service connected to the Ingress controller, which, in turn, routes to many internal pods.
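That single entry point could be sketched as a Service like the one below (the label `app: ingress-controller` is an assumption; the actual label depends on how the controller was deployed):

```yaml
# Hypothetical Service exposing the Ingress controller pods externally.
apiVersion: v1
kind: Service
metadata:
  name: ingress-controller
spec:
  type: LoadBalancer           # or NodePort on clusters without a cloud LB
  selector:
    app: ingress-controller    # assumed label on the controller pods
  ports:
  - name: http
    port: 80
    targetPort: 80
  - name: https
    port: 443
    targetPort: 443
```

With this in place, only one cloud load balancer (or one pair of node ports) is consumed, no matter how many internal services the Ingress rules route to.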