schemaRegistryAuth username and password not supporting env vars #38

Open
3 of 4 tasks
Antoine7773 opened this issue Jan 29, 2025 · 3 comments
Labels: status/triage/completed (Automatic triage completed) · status/triage/manual (Manual triage in progress) · type/bug (Something isn't working)

Comments

Antoine7773 commented Jan 29, 2025

Issue submitter TODO list

  • I've looked up my issue in FAQ
  • I've searched for already existing issues here
  • I've tried running main-labeled docker image and the issue still persists there
  • I'm running a supported version of the application which is listed here

Describe the bug (actual behavior)

I would like the UI to connect to a schema registry where basic auth is enabled.
Here is the configuration used:

yamlApplicationConfig:
  kafka:
    clusters:
      - name: pre-prod
        bootstrapServers: "SASL_SSL://server_01g:9092,SASL_SSL://server_01s:9092,SASL_SSL://server_02g:9092,SASL_SSL://server_02s:9092"
        schemaRegistry: "http://server_01g:8081,http://server_01s:8081,http://server_02g:8081,http://server_02s:8081"
        schemaRegistryAuth:
          username: ${SCHEMA_USER_USERNAME}
          password: ${SCHEMA_USER_PASSWORD}
        properties:
          sasl:
            jaas.config: org.apache.kafka.common.security.scram.ScramLoginModule required username=${KAFKAUI_USERNAME} password=${KAFKAUI_PASSWORD};
            mechanism: SCRAM-SHA-256
          security.protocol: SASL_SSL
          ssl.truststore.location: /usr/lib/jvm/java-21-zulu21/lib/security/cacerts
          ssl.truststore.password: ${TRUSTSTORE_PASSWORD}
          ssl.keystore.location: /etc/ssl/kafka.jks
          ssl.keystore.password: ${KEYSTORE_PASSWORD}
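
For context, the two failing variables are injected into the pod the same way as the working ones. Roughly, in the deployment spec (a sketch; the secret name and keys here are illustrative, not my real values):

env:
  - name: SCHEMA_USER_USERNAME
    valueFrom:
      secretKeyRef:
        name: schema-registry-credentials  # illustrative secret name
        key: username
  - name: SCHEMA_USER_PASSWORD
    valueFrom:
      secretKeyRef:
        name: schema-registry-credentials
        key: password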

All references to environment variables work well (i.e. ${KAFKAUI_USERNAME}, ${KAFKAUI_PASSWORD}, ${TRUSTSTORE_PASSWORD}, ${KEYSTORE_PASSWORD}) except these two: ${SCHEMA_USER_USERNAME} and ${SCHEMA_USER_PASSWORD}.

When putting clear-text values instead, like this:

[...]
        schemaRegistryAuth:
          username: test_user
          password: test_password
[...]

It works well.

When I enter the pod in which kafka-ui is deployed, the env variables are present and their values match the clear-text ones that work.

This results in a 401 error when trying to browse the schema registry from the UI:

Standard Commons Logging discovery in action with spring-jcl: please remove commons-logging.jar from classpath in order to avoid potential conflicts
 _   _ ___    __             _                _          _  __      __ _
| | | |_ _|  / _|___ _ _    /_\  _ __ __ _ __| |_  ___  | |/ /__ _ / _| |_____
| |_| || |  |  _/ _ | '_|  / _ \| '_ / _` / _| ' \/ -_) | ' </ _` |  _| / / _`|
 \___/|___| |_| \___|_|   /_/ \_| .__\__,_\__|_||_\___| |_|\_\__,_|_| |_\_\__,|
                                 |_|
2025-01-29 13:15:41,112 INFO  [main] i.k.u.KafkaUiApplication: Starting KafkaUiApplication using Java 21.0.5 with PID 1 (/api.jar started by kafkaui in /)
2025-01-29 13:15:41,114 DEBUG [main] i.k.u.KafkaUiApplication: Running with Spring Boot v3.4.1, Spring v6.2.1
2025-01-29 13:15:41,114 INFO  [main] i.k.u.KafkaUiApplication: No active profile set, falling back to 1 default profile: "default"
2025-01-29 13:15:44,431 DEBUG [main] i.k.u.s.SerdesInitializer: Configuring serdes for cluster pre-prod
2025-01-29 13:15:44,439 INFO  [main] i.c.k.s.KafkaAvroDeserializerConfig: KafkaAvroDeserializerConfig values:
	auto.register.schemas = true
	avro.reflection.allow.null = false
	avro.use.logical.type.converters = true
	basic.auth.credentials.source = URL
	basic.auth.user.info = [hidden]
	bearer.auth.cache.expiry.buffer.seconds = 300
	bearer.auth.client.id = null
	bearer.auth.client.secret = null
	bearer.auth.credentials.source = STATIC_TOKEN
	bearer.auth.custom.provider.class = null
	bearer.auth.identity.pool.id = null
	bearer.auth.issuer.endpoint.url = null
	bearer.auth.logical.cluster = null
	bearer.auth.scope = null
	bearer.auth.scope.claim.name = scope
	bearer.auth.sub.claim.name = sub
	bearer.auth.token = [hidden]
	context.name.strategy = class io.confluent.kafka.serializers.context.NullContextNameStrategy
	http.connect.timeout.ms = 60000
	http.read.timeout.ms = 60000
	id.compatibility.strict = true
	key.subject.name.strategy = class io.confluent.kafka.serializers.subject.TopicNameStrategy
	latest.cache.size = 1000
	latest.cache.ttl.sec = -1
	latest.compatibility.strict = true
	max.schemas.per.subject = 1000
	normalize.schemas = false
	propagate.schema.tags = false
	proxy.host =
	proxy.port = -1
	rule.actions = []
	rule.executors = []
	rule.service.loader.enable = true
	schema.format = null
	schema.reflection = false
	schema.registry.basic.auth.user.info = [hidden]
	schema.registry.ssl.cipher.suites = null
	schema.registry.ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
	schema.registry.ssl.endpoint.identification.algorithm = https
	schema.registry.ssl.engine.factory.class = null
	schema.registry.ssl.key.password = null
	schema.registry.ssl.keymanager.algorithm = SunX509
	schema.registry.ssl.keystore.certificate.chain = null
	schema.registry.ssl.keystore.key = null
	schema.registry.ssl.keystore.location = null
	schema.registry.ssl.keystore.password = null
	schema.registry.ssl.keystore.type = JKS
	schema.registry.ssl.protocol = TLSv1.3
	schema.registry.ssl.provider = null
	schema.registry.ssl.secure.random.implementation = null
	schema.registry.ssl.trustmanager.algorithm = PKIX
	schema.registry.ssl.truststore.certificates = null
	schema.registry.ssl.truststore.location = null
	schema.registry.ssl.truststore.password = null
	schema.registry.ssl.truststore.type = JKS
	schema.registry.url = [wontbeused]
	specific.avro.key.type = null
	specific.avro.reader = false
	specific.avro.value.type = null
	use.latest.version = false
	use.latest.with.metadata = null
	use.schema.id = -1
	value.subject.name.strategy = class io.confluent.kafka.serializers.subject.TopicNameStrategy
2025-01-29 13:15:44,866 INFO  [main] o.s.b.a.e.w.EndpointLinksResolver: Exposing 3 endpoints beneath base path '/actuator'
2025-01-29 13:15:44,966 INFO  [main] i.k.u.c.a.OAuthSecurityConfig: Configuring OAUTH2 authentication.
2025-01-29 13:15:45,186 INFO  [main] o.s.b.f.a.AutowiredAnnotationBeanPostProcessor: Inconsistent constructor declaration on bean with name '_reactiveMethodSecurityConfiguration': single autowire-marked constructor flagged as optional - this constructor is effectively required since there is no default constructor to fall back to: org.springframework.security.config.annotation.method.configuration.ReactiveAuthorizationManagerMethodSecurityConfiguration(org.springframework.security.access.expression.method.MethodSecurityExpressionHandler,org.springframework.beans.factory.ObjectProvider,org.springframework.beans.factory.ObjectProvider)
2025-01-29 13:15:45,533 INFO  [main] o.s.b.w.e.n.NettyWebServer: Netty started on port 8080 (http)
2025-01-29 13:15:45,547 INFO  [main] i.k.u.KafkaUiApplication: Started KafkaUiApplication in 5.111 seconds (process running for 5.67)
2025-01-29 13:15:45,703 DEBUG [parallel-3] i.k.u.s.ClustersStatisticsScheduler: Start getting metrics for kafkaCluster: pre-prod
2025-01-29 13:15:45,713 INFO  [parallel-3] o.a.k.c.a.AdminClientConfig: AdminClientConfig values:
	auto.include.jmx.reporter = true
	bootstrap.controllers = []
	bootstrap.servers = [SASL_SSL://server_01g:9092, SASL_SSL://server_01s:9092, SASL_SSL://server_02g:9092, SASL_SSL://server_02s:9092]
	client.dns.lookup = use_all_dns_ips
	client.id = kafbat-ui-admin-1738156545-1
	connections.max.idle.ms = 300000
	default.api.timeout.ms = 60000
	enable.metrics.push = true
	metadata.max.age.ms = 300000
	metadata.recovery.strategy = none
	metric.reporters = []
	metrics.num.samples = 2
	metrics.recording.level = INFO
	metrics.sample.window.ms = 30000
	receive.buffer.bytes = 65536
	reconnect.backoff.max.ms = 1000
	reconnect.backoff.ms = 50
	request.timeout.ms = 30000
	retries = 2147483647
	retry.backoff.max.ms = 1000
	retry.backoff.ms = 100
	sasl.client.callback.handler.class = null
	sasl.jaas.config = [hidden]
	sasl.kerberos.kinit.cmd = /usr/bin/kinit
	sasl.kerberos.min.time.before.relogin = 60000
	sasl.kerberos.service.name = null
	sasl.kerberos.ticket.renew.jitter = 0.05
	sasl.kerberos.ticket.renew.window.factor = 0.8
	sasl.login.callback.handler.class = null
	sasl.login.class = null
	sasl.login.connect.timeout.ms = null
	sasl.login.read.timeout.ms = null
	sasl.login.refresh.buffer.seconds = 300
	sasl.login.refresh.min.period.seconds = 60
	sasl.login.refresh.window.factor = 0.8
	sasl.login.refresh.window.jitter = 0.05
	sasl.login.retry.backoff.max.ms = 10000
	sasl.login.retry.backoff.ms = 100
	sasl.mechanism = SCRAM-SHA-256
	sasl.oauthbearer.clock.skew.seconds = 30
	sasl.oauthbearer.expected.audience = null
	sasl.oauthbearer.expected.issuer = null
	sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
	sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
	sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
	sasl.oauthbearer.jwks.endpoint.url = null
	sasl.oauthbearer.scope.claim.name = scope
	sasl.oauthbearer.sub.claim.name = sub
	sasl.oauthbearer.token.endpoint.url = null
	security.protocol = SASL_SSL
	security.providers = null
	send.buffer.bytes = 131072
	socket.connection.setup.timeout.max.ms = 30000
	socket.connection.setup.timeout.ms = 10000
	ssl.cipher.suites = null
	ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
	ssl.endpoint.identification.algorithm = https
	ssl.engine.factory.class = null
	ssl.key.password = null
	ssl.keymanager.algorithm = SunX509
	ssl.keystore.certificate.chain = null
	ssl.keystore.key = null
	ssl.keystore.location = /etc/ssl/kafka.jks
	ssl.keystore.password = [hidden]
	ssl.keystore.type = JKS
	ssl.protocol = TLSv1.3
	ssl.provider = null
	ssl.secure.random.implementation = null
	ssl.trustmanager.algorithm = PKIX
	ssl.truststore.certificates = null
	ssl.truststore.location = /usr/lib/jvm/java-21-zulu21/lib/security/cacerts
	ssl.truststore.password = [hidden]
	ssl.truststore.type = JKS
2025-01-29 13:15:45,795 INFO  [parallel-2] o.a.k.c.u.AppInfoParser: Kafka version: 7.8.0-ccs
2025-01-29 13:15:45,795 INFO  [parallel-2] o.a.k.c.u.AppInfoParser: Kafka commitId: cc7168da1fddfcfd
2025-01-29 13:15:45,795 INFO  [parallel-2] o.a.k.c.u.AppInfoParser: Kafka startTimeMs: 1738156545783
2025-01-29 13:15:45,846 INFO  [parallel-3] o.a.k.c.s.a.AbstractLogin: Successfully logged in.
2025-01-29 13:15:45,858 INFO  [parallel-3] o.a.k.c.u.AppInfoParser: Kafka version: 7.8.0-ccs
2025-01-29 13:15:45,858 INFO  [parallel-3] o.a.k.c.u.AppInfoParser: Kafka commitId: cc7168da1fddfcfd
2025-01-29 13:15:45,858 INFO  [parallel-3] o.a.k.c.u.AppInfoParser: Kafka startTimeMs: 1738156545858
2025-01-29 13:15:46,197 DEBUG [parallel-71] i.k.u.s.ClustersStatisticsScheduler: Metrics updated for cluster: pre-prod
2025-01-29 13:16:15,547 DEBUG [parallel-1] i.k.u.s.ClustersStatisticsScheduler: Start getting metrics for kafkaCluster: pre-prod
2025-01-29 13:16:15,554 DEBUG [parallel-11] i.k.u.s.ClustersStatisticsScheduler: Metrics updated for cluster: pre-prod
2025-01-29 13:16:45,547 DEBUG [parallel-65] i.k.u.s.ClustersStatisticsScheduler: Start getting metrics for kafkaCluster: pre-prod
2025-01-29 13:16:45,559 DEBUG [parallel-13] i.k.u.s.ClustersStatisticsScheduler: Metrics updated for cluster: pre-prod
2025-01-29 13:17:15,547 DEBUG [parallel-57] i.k.u.s.ClustersStatisticsScheduler: Start getting metrics for kafkaCluster: pre-prod
2025-01-29 13:17:15,554 DEBUG [parallel-68] i.k.u.s.ClustersStatisticsScheduler: Metrics updated for cluster: pre-prod
2025-01-29 13:17:32,163 DEBUG [reactor-http-epoll-9] i.k.u.s.r.e.OauthAuthorityExtractor: Principal name is: [***]
2025-01-29 13:17:32,164 DEBUG [reactor-http-epoll-9] i.k.u.s.r.e.OauthAuthorityExtractor: Matched roles by username: []
2025-01-29 13:17:32,164 DEBUG [reactor-http-epoll-9] i.k.u.s.r.e.OauthAuthorityExtractor: Token's groups: [***]
2025-01-29 13:17:32,164 DEBUG [reactor-http-epoll-9] i.k.u.s.r.e.OauthAuthorityExtractor: Matched group roles: [admin]
2025-01-29 13:17:35,418 ERROR [reactor-http-epoll-12] o.s.b.a.w.r.e.AbstractErrorWebExceptionHandler: [0f57a407-52]  500 Server Error for HTTP GET "/api/clusters/pre-prod/schemas?page=1&perPage=25"
org.springframework.web.reactive.function.client.WebClientResponseException$Unauthorized: 401 Unauthorized from GET http://server_01g:8081/subjects
	at org.springframework.web.reactive.function.client.WebClientResponseException.create(WebClientResponseException.java:322)
	Suppressed: reactor.core.publisher.FluxOnAssembly$OnAssemblyException:
Error has been observed at the following site(s):
	*__checkpoint ⇢ 401 UNAUTHORIZED from GET http://server_01g:8081/subjects [DefaultWebClient]
	*__checkpoint ⇢ Handler io.kafbat.ui.controller.SchemasController#getSchemas(String, Integer, Integer, String, ServerWebExchange) [DispatcherHandler]
	*__checkpoint ⇢ io.kafbat.ui.config.CorsGlobalConfiguration$$Lambda/0x00007f972f7574b8 [DefaultWebFilterChain]
	*__checkpoint ⇢ io.kafbat.ui.config.ReadOnlyModeFilter [DefaultWebFilterChain]
	*__checkpoint ⇢ io.kafbat.ui.config.CustomWebFilter [DefaultWebFilterChain]
	*__checkpoint ⇢ AuthorizationWebFilter [DefaultWebFilterChain]
	*__checkpoint ⇢ ExceptionTranslationWebFilter [DefaultWebFilterChain]
	*__checkpoint ⇢ LogoutWebFilter [DefaultWebFilterChain]
	*__checkpoint ⇢ ServerRequestCacheWebFilter [DefaultWebFilterChain]
	*__checkpoint ⇢ SecurityContextServerWebExchangeWebFilter [DefaultWebFilterChain]
	*__checkpoint ⇢ LogoutPageGeneratingWebFilter [DefaultWebFilterChain]
	*__checkpoint ⇢ LoginPageGeneratingWebFilter [DefaultWebFilterChain]
	*__checkpoint ⇢ StaticFileWebFilter [DefaultWebFilterChain]
	*__checkpoint ⇢ DefaultResourcesWebFilter [DefaultWebFilterChain]
	*__checkpoint ⇢ OAuth2LoginAuthenticationWebFilter [DefaultWebFilterChain]
	*__checkpoint ⇢ OAuth2AuthorizationRequestRedirectWebFilter [DefaultWebFilterChain]
	*__checkpoint ⇢ ReactorContextWebFilter [DefaultWebFilterChain]
	*__checkpoint ⇢ HttpHeaderWriterWebFilter [DefaultWebFilterChain]
	*__checkpoint ⇢ ServerWebExchangeReactorContextWebFilter [DefaultWebFilterChain]
	*__checkpoint ⇢ org.springframework.security.web.server.WebFilterChainProxy [DefaultWebFilterChain]
	*__checkpoint ⇢ HTTP GET "/api/clusters/pre-prod/schemas?page=1&perPage=25" [ExceptionHandlingWebHandler]
Original Stack Trace:
		at org.springframework.web.reactive.function.client.WebClientResponseException.create(WebClientResponseException.java:322)
		at org.springframework.web.reactive.function.client.DefaultClientResponse.lambda$createException$1(DefaultClientResponse.java:214)
		at reactor.core.publisher.FluxMap$MapSubscriber.onNext(FluxMap.java:106)
		at reactor.core.publisher.FluxOnErrorReturn$ReturnSubscriber.onNext(FluxOnErrorReturn.java:162)
		at reactor.core.publisher.FluxDefaultIfEmpty$DefaultIfEmptySubscriber.onNext(FluxDefaultIfEmpty.java:122)
		at reactor.core.publisher.FluxMapFuseable$MapFuseableSubscriber.onNext(FluxMapFuseable.java:129)
		at reactor.core.publisher.FluxContextWrite$ContextWriteSubscriber.onNext(FluxContextWrite.java:107)
		at reactor.core.publisher.FluxMapFuseable$MapFuseableConditionalSubscriber.onNext(FluxMapFuseable.java:299)
		at reactor.core.publisher.FluxFilterFuseable$FilterFuseableConditionalSubscriber.onNext(FluxFilterFuseable.java:337)
		at reactor.core.publisher.Operators$BaseFluxToMonoOperator.completePossiblyEmpty(Operators.java:2097)
		at reactor.core.publisher.MonoCollect$CollectSubscriber.onComplete(MonoCollect.java:145)
		at reactor.core.publisher.FluxMap$MapSubscriber.onComplete(FluxMap.java:144)
		at reactor.core.publisher.FluxPeek$PeekSubscriber.onComplete(FluxPeek.java:260)
		at reactor.core.publisher.FluxMap$MapSubscriber.onComplete(FluxMap.java:144)
		at reactor.netty.channel.FluxReceive.onInboundComplete(FluxReceive.java:413)
		at reactor.netty.channel.ChannelOperations.onInboundComplete(ChannelOperations.java:455)
		at reactor.netty.channel.ChannelOperations.terminate(ChannelOperations.java:509)
		at reactor.netty.http.client.HttpClientOperations.onInboundNext(HttpClientOperations.java:817)
		at reactor.netty.channel.ChannelOperationsHandler.channelRead(ChannelOperationsHandler.java:115)
		at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444)
		at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
		at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
		at io.netty.channel.CombinedChannelDuplexHandler$DelegatingChannelHandlerContext.fireChannelRead(CombinedChannelDuplexHandler.java:436)
		at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346)
		at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318)
		at io.netty.channel.CombinedChannelDuplexHandler.channelRead(CombinedChannelDuplexHandler.java:251)
		at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442)
		at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
		at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
		at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1357)
		at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440)
		at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
		at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:868)
		at io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:799)
		at io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:501)
		at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:399)
		at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
		at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
		at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
		at java.base/java.lang.Thread.run(Thread.java:1583)

It seems the UI is using the literal strings ${SCHEMA_USER_USERNAME} and ${SCHEMA_USER_PASSWORD} as the username and password, and not the values of the environment variables they reference.
Could there be an issue with YAML placeholder expansion for these two variables?
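
As a possible workaround (untested on my side), it should also be possible to skip the ${...} placeholders entirely and rely on Spring Boot relaxed binding, which maps env var names directly onto the config tree, e.g.:

env:
  - name: KAFKA_CLUSTERS_0_SCHEMAREGISTRYAUTH_USERNAME  # binds to kafka.clusters[0].schemaRegistryAuth.username
    valueFrom:
      secretKeyRef:
        name: schema-registry-credentials  # illustrative secret name
        key: username
  - name: KAFKA_CLUSTERS_0_SCHEMAREGISTRYAUTH_PASSWORD  # binds to kafka.clusters[0].schemaRegistryAuth.password
    valueFrom:
      secretKeyRef:
        name: schema-registry-credentials
        key: password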

Expected behavior

I should not face 401 errors when using env vars for the schema registry username and password.

Your installation details

  1. kafbat/kafka-ui@4cf17a0
apiVersion: v2
name: kafka-ui
description: A Helm chart for kafka-UI
type: application
version: 1.4.10
appVersion: v1.1.0
icon: https://raw.githubusercontent.com/kafbat/kafka-ui/main/documentation/images/logo_new.png

yamlApplicationConfig:
  kafka:
    clusters:
      - name: pre-prod
        bootstrapServers: "SASL_SSL://server_01g:9092,SASL_SSL://server_01s:9092,SASL_SSL://server_02g:9092,SASL_SSL://server_02s:9092"
        schemaRegistry: "http://server_01g:8081,http://server_01s:8081,http://server_02g:8081,http://server_02s:8081"
        schemaRegistryAuth:
          username: ${SCHEMA_USER_USERNAME}
          password: ${SCHEMA_USER_PASSWORD}
        properties:
          sasl:
            jaas.config: org.apache.kafka.common.security.scram.ScramLoginModule required username=${KAFKAUI_USERNAME} password=${KAFKAUI_PASSWORD};
            mechanism: SCRAM-SHA-256
          security.protocol: SASL_SSL
          ssl.truststore.location: /usr/lib/jvm/java-21-zulu21/lib/security/cacerts
          ssl.truststore.password: ${TRUSTSTORE_PASSWORD}
          ssl.keystore.location: /etc/ssl/kafka.jks
          ssl.keystore.password: ${KEYSTORE_PASSWORD}

Steps to reproduce

  1. Have a schema registry with basic auth enabled
  2. Pass the schemaRegistryAuth username and password as env variables and try to browse the Schema Registry (a minimal config sketch follows)
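
A minimal cluster config that should trigger the issue (a sketch; hostnames and variable names are placeholders):

kafka:
  clusters:
    - name: repro
      bootstrapServers: "broker:9092"
      schemaRegistry: "http://schema-registry:8081"
      schemaRegistryAuth:
        username: ${SR_USER}  # ends up being sent as the literal string, not the env value
        password: ${SR_PASS}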

Screenshots

No response

Logs

No response

Additional context

No response

Antoine7773 added labels status/triage (Issues pending maintainers triage) and type/bug (Something isn't working) on Jan 29, 2025
kapybro bot added labels status/triage/manual (Manual triage in progress) and status/triage/completed (Automatic triage completed), and removed status/triage (Issues pending maintainers triage) on Jan 29, 2025
kapybro bot commented Jan 29, 2025

Hi Antoine7773! 👋

Welcome, and thank you for opening your first issue in the repo!

Please wait for triaging by our maintainers.

As development is carried out in our spare time, you can support us by sponsoring our activities or even funding the development of specific issues.
Sponsorship link

If you plan to raise a PR for this issue, please take a look at our contributing guide.

Haarolean (Member) commented

Works for me without the helm charts:

kafka:
  clusters:
    - name: local
      bootstrapServers: localhost:9092
      schemaRegistry: http://localhost:8085
      schemaRegistryAuth:
        username: ${TEST_USER}
        password: ${TEST_PASS}
2025-01-29 23:22:40,823 INFO  [restartedMain] i.k.u.s.KafkaClusterFactory: sr: yolo, kekw

Either an issue with the charts (idk how it could affect env vars like this) or a misconfiguration on your side. Moving to the charts repo.
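
One way to narrow it down: if the chart passes yamlApplicationConfig through to the mounted config file verbatim, the raw placeholders should still be there for Spring to resolve at startup. Roughly, the rendered ConfigMap would look like this (a sketch; the ConfigMap and key names depend on the chart):

apiVersion: v1
kind: ConfigMap
metadata:
  name: kafka-ui-config  # illustrative name
data:
  config.yml: |
    kafka:
      clusters:
        - name: pre-prod
          schemaRegistryAuth:
            username: ${SCHEMA_USER_USERNAME}  # should still be a raw placeholder here
            password: ${SCHEMA_USER_PASSWORD}

If the rendered file already contains mangled or empty values instead, the problem is on the chart side.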

Haarolean transferred this issue from kafbat/kafka-ui on Jan 29, 2025
Haarolean (Member) commented

@azatsafin any ideas?
