The Component Pack uses NodePorts to publish its services. For production environments this is considered bad practice in most cases, since NodePorts come with several limitations: only one port per service, a limited port range, and static binding to node IPs. It's also hard to remember all those long IP/port combinations, since standard ports like 80/443 cannot be used. And because the ports are hard coded by IBM, it's impossible to install more than one instance.
Don't get me wrong: Docker/Kubernetes is a great platform, and I'm happy that Connections finally started using these modern tools, which make the platform interesting from an administrative perspective and help keep it future-proof. But the Component Pack should follow best practices to keep the deployment, and the application itself, flexible.
Commonly, Helm charts use ClusterIP services and even let the user choose various parameters to fit their needs and infrastructure. That's one of the biggest benefits of this technology. With the Component Pack, however, we get a black box from IBM. Please use state-of-the-art concepts so that we as customers can profit from the power of these tools too and perform simple tasks like installing multiple instances of our software.
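To illustrate what "letting the user choose" typically looks like: a well-designed Helm chart exposes the service type and port as values that the operator can override at install time. The chart structure and value names below are a generic sketch, not actual Component Pack parameters:

```yaml
# values.yaml -- hypothetical chart that exposes the service type as a parameter
# instead of hard-coding a NodePort
service:
  type: ClusterIP   # could also be NodePort or LoadBalancer, per environment
  port: 443
```

An operator could then override this per installation, e.g. `helm install my-release ./chart --set service.type=ClusterIP`, without touching the chart itself.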
In my view, that's a basic requirement for enterprise usage that IBM should know and consider. We have two productive Connections installations where Component Pack usage is planned. For this alone, two Kubernetes clusters would be necessary, although there is no technical reason for it. Kubernetes even offers great tools for this, e.g. using subdomains with an Ingress.
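The subdomain approach mentioned above could look roughly like this: a single Ingress resource routes traffic to two instances based on host name, so both can share one cluster and the standard ports 80/443. Host names and service names here are illustrative assumptions, not actual Component Pack resources:

```yaml
# Hypothetical Ingress routing two Component Pack instances by subdomain.
# instance1/instance2 host and service names are made up for illustration.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: connections-ingress
spec:
  rules:
  - host: instance1.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: instance1-frontend   # ClusterIP service of instance 1
            port:
              number: 443
  - host: instance2.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: instance2-frontend   # ClusterIP service of instance 2
            port:
              number: 443
```

With this pattern, each instance only needs its own namespace and ClusterIP services; the Ingress controller handles the external exposure on standard ports.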