fix reading properties without changing old configuration #119
Conversation
@wuciawe We'll follow your approach since, as you well pointed out, it would keep backward compatibility.
- private val clientOptions = config.properties //config.properties.filterKeys(_.contains(MongodbConfig.ListMongoClientOptions)) // TODO review this Map. Can't filter keys
+ private val clientOptions = {
+   val lowerCaseOptions = MongodbConfig.ListMongoClientOptions.map(_.toLowerCase)
+   config.properties.filter(kv => kv._1.contains(lowerCaseOptions))
+ }
The logic here is wrong: you are testing whether each key contains the whole option list, rather than whether the option list contains the key.
I'd try something like:
lowerCaseOptions.flatMap(key => config.properties.get(key).map(key -> _)) toMap
Or, cleaner:
private val clientOptions = {
val lowerCaseOptions = MongodbConfig.ListMongoClientOptions.map(_.toLowerCase).toSet
config.properties.filterKeys(lowerCaseOptions contains _)
}
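As an aside on why the original line compiles at all (something both participants remark on later in the thread): on the Scala 2.10/2.11 toolchain of that era, a String picks up a Seq-style contains whose element type widens to Any, so passing a whole List type-checks but can never match. A minimal, hypothetical sketch, not part of the connector:

object ContainsPitfall extends App {
  val properties = Map("host" -> "localhost:27017", "readpreference" -> "primary")
  val lowerCaseOptions = List("host", "readpreference")

  // Through StringOps, `contains` accepts Any (A1 >: Char infers to Any),
  // so comparing the key's characters against a whole List compiles...
  val wrong = properties.filter(kv => kv._1.contains(lowerCaseOptions))
  println(wrong) // ...but never matches: Map()

  // What was intended: is the key a member of the option list?
  val right = properties.filter { case (k, _) => lowerCaseOptions contains k }
  println(right) // Map(host -> localhost:27017, readpreference -> primary)
}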
Looking at your commits, I see you had already noticed that filterKeys
isn't compatible with a map that is to be distributed to Spark workers, as the instance it returns is not serializable. In any case, you should apply the logic I proposed:
private val clientOptions = {
val lowerCaseOptions = MongodbConfig.ListMongoClientOptions.map(_.toLowerCase).toSet
config.properties.filter { case (k, _) => lowerCaseOptions contains k }
}
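To make the serializability point concrete, here is a small sketch, assuming the era's Scala toolchain and plain Java serialization (names here are illustrative, not the connector's): filterKeys returns a lazy view backed by an anonymous class that does not implement Serializable, whereas filter materializes an ordinary, serializable Map.

import java.io.{ByteArrayOutputStream, NotSerializableException, ObjectOutputStream}

object FilterKeysPitfall extends App {
  val options = Map("host" -> "localhost:27017", "readpreference" -> "primary")
  val allowed = Set("host", "readpreference")

  def trySerialize(label: String, value: AnyRef): Unit =
    try {
      new ObjectOutputStream(new ByteArrayOutputStream()).writeObject(value)
      println(s"$label: serializable")
    } catch {
      case _: NotSerializableException => println(s"$label: NOT serializable")
    }

  // The lazy view produced by filterKeys cannot be shipped to workers...
  trySerialize("filterKeys", options.filterKeys(allowed contains _))
  // ...while filter builds a regular immutable Map, which can.
  trySerialize("filter", options.filter { case (k, _) => allowed contains k })
}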
@wuciawe Change it as suggested. Before committing, make sure you pass the project test set; we'll run the tests a second time and, if everything is OK, we'll merge.
@pfcoperez I merged your suggestion and all project tests pass on my local machine. Sorry for the wrong logic in the previous pull request; I'm surprised that the wrong code compiles. Is it necessary to squash all those commits into one this time?
@wuciawe No worries. I was surprised too; it's probably a consequence of contains accepting Any, so the call type-checks even though it can never match. Thanks again.
It fixes #111 and, unlike #112, it doesn't require changing the old configuration. The implementation is inspired by org.apache.spark.sql.execution.datasources.CaseInsensitiveMap in Spark SQL.
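For context on that pattern, a minimal sketch of a CaseInsensitiveMap in the spirit of Spark SQL's class (names and details here are illustrative, not the connector's code, and the override signatures assume a pre-2.13 Scala Map): keys are lower-cased once on construction and again on lookup, so mixed-case keys in an old configuration resolve to the same entry.

// Illustrative sketch, not the connector's actual code.
class CaseInsensitiveMap(map: Map[String, String])
  extends Map[String, String] with Serializable {

  // Normalize all keys once, up front.
  private val baseMap = map.map { case (k, v) => k.toLowerCase -> v }

  // Normalize the key again on every lookup.
  override def get(k: String): Option[String] = baseMap.get(k.toLowerCase)

  override def iterator: Iterator[(String, String)] = baseMap.iterator

  override def +[B1 >: String](kv: (String, B1)): Map[String, B1] =
    baseMap + (kv._1.toLowerCase -> kv._2)

  override def -(key: String): Map[String, String] =
    new CaseInsensitiveMap(baseMap - key.toLowerCase)
}

With this in place, new CaseInsensitiveMap(Map("readPreference" -> "primary")).get("readpreference") returns Some("primary"), which is why existing configurations keep working without changes.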