Update dependency mlflow to v2 #16
Open
renovate wants to merge 1 commit into master from renovate/mlflow-2.x
This PR contains the following updates:
mlflow: ==1.20.0 -> ==2.20.1
Release Notes
mlflow/mlflow (mlflow)
v2.20.1
Compare Source
MLflow 2.20.1 is a patch release that includes several bug fixes and features:
Features:
Bug fixes:
Other small updates:
#14337, #14382, @B-Step62; #14356, @daniellok-db; #14354, @artjen; #14360, @TomuHirata
v2.20.0
Compare Source
We are excited to announce the release of MLflow 2.20.0! This release includes a number of significant features, enhancements, and bug fixes.
Major New Features
💡 Type Hint-Based Model Signature: Define your model's signature in the most Pythonic way. MLflow now supports defining a model signature based on the type hints in your PythonModel's predict function, and validating input data payloads against it (a hedged sketch follows after this feature list). (#14182, #14168, #14130, #14100, #14099, @serena-ruan)
🧠 Bedrock / Groq Tracing Support: MLflow Tracing now offers a one-line auto-tracing experience for Amazon Bedrock and Groq LLMs. Track LLM invocations within your model by simply adding an mlflow.bedrock.tracing or mlflow.groq.tracing call to the code. (#14018, @B-Step62, #14006, @anumita0203)
🗒️ Inline Trace Rendering in Jupyter Notebook: MLflow now supports rendering a trace UI within the notebook where you are running models. This eliminates the need to frequently switch between the notebook and browser, creating a seamless local model debugging experience. Check out this blog post for a quick demo! (#13955, @daniellok-db)
⚡️ Faster Model Validation with uv Package Manager: MLflow has adopted uv, a new Rust-based, super-fast Python package manager. This release adds support for the new package manager in the mlflow.models.predict API, enabling faster model environment validation. Stay tuned for more updates! (#13824, @serena-ruan)
🖥️ New Chat Panel in Trace UI: The MLflow Trace UI now shows a unified chat panel for LLM invocations. The update allows you to view chat messages and function calls in a rich and consistent UI across LLM providers, as well as inspect the raw input and output payloads. (#14211, @TomuHirata)
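As a quick illustration of the type hint-based signature feature described above, here is a minimal sketch. It assumes a PythonModel subclass whose predict method is annotated with plain Python type hints; the class name, artifact path, and sample data are hypothetical and only show the shape of the API.

```python
from typing import List

import mlflow
from mlflow.pyfunc import PythonModel


class UppercaseModel(PythonModel):
    # Hypothetical example model: the signature is meant to be inferred from
    # these type hints at logging time, and input payloads validated against it.
    def predict(self, model_input: List[str]) -> List[str]:
        return [text.upper() for text in model_input]


with mlflow.start_run():
    model_info = mlflow.pyfunc.log_model(
        artifact_path="uppercase_model",  # hypothetical artifact path
        python_model=UppercaseModel(),
    )

loaded = mlflow.pyfunc.load_model(model_info.model_uri)
print(loaded.predict(["hello", "mlflow"]))
```

If the uv support mentioned above is exposed through the env_manager option of mlflow.models.predict (an assumption worth checking against the 2.20 documentation), the same logged model could then be validated with something like mlflow.models.predict(model_uri=model_info.model_uri, input_data=["hello"], env_manager="uv").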
Other Features:
ChatAgent base class for defining custom Python agents (#13797, @bbqiu)
context parameter made optional when calling a PythonModel instance (#14059, @serena-ruan)
ChatModel (#14068, @stevenchen-db)
Bug fixes:
log_image (#14281, @TomeHirata)
0584bdc (#14146, @daniellok-db)
loaded_model variable (#14109, @yang-chengg)
DatabricksSDKModelsArtifactRepository.list_artifacts is called on a file (#14027, @shichengzhou-db)
Documentation updates:
Small bug fixes and documentation updates:
#14294, #14252, #14233, #14205, #14217, #14172, #14188, #14167, #14166, #14163, #14162, #14161, #13971, @TomeHirata; #14299, #14280, #14279, #14278, #14272, #14270, #14268, #14269, #14263, #14258, #14222, #14248, #14128, #14112, #14111, #14093, #14096, #14095, #14090, #14089, #14085, #14078, #14074, #14070, #14053, #14060, #14035, #14014, #14002, #14000, #13997, #13996, #13995, @harupy; #14298, #14286, #14249, #14276, #14259, #14242, #14254, #14232, #14207, #14206, #14185, #14196, #14193, #14173, #14164, #14159, #14165, #14152, #14151, #14126, #14069, #13987, @B-Step62; #14295, #14265, #14271, #14262, #14235, #14239, #14234, #14228, #14227, #14229, #14218, #14216, #14213, #14208, #14204, #14198, #14187, #14181, #14177, #14176, #14156, #14169, #14099, #14086, #13983, @serena-ruan; #14155, #14067, #14140, #14132, #14072, @daniellok-db; #14178, @emmanuel-ferdman; #14247, @dbczumar; #13789, #14108, @dsuhinin; #14212, @aravind-segu; #14223, #14191, #14084, @dsmilkov; #13804, @kriscon-db; #14158, @Lodewic; #14148, #14147, #14115, #14079, #14116, @WeichenXu123; #14135, @brilee; #14133, @manos02; #14121, @LeahKorol; #14025, @nojaf; #13948, @benglewis; #13942, @justsomerandomdude264; #14003, @Ajay-Satish-01; #13982, @prithvikannan; #13638, @MaxwellSalmon
v2.19.0
Compare Source
We are excited to announce the release of MLflow 2.19.0! This release includes a number of significant features, enhancements, and bug fixes.
Major New Features
ChatModel enhancements - ChatModel now adopts ChatCompletionRequest and ChatCompletionResponse as its new schema. The predict_stream interface uses ChatCompletionChunk to deliver true streaming responses. Additionally, the custom_inputs and custom_outputs fields in ChatModel now utilize AnyType, enabling support for a wider variety of data types. Note: In a future version of MLflow, ChatParams (and by extension, ChatCompletionRequest) will have the default values for n, temperature, and stream removed. (#13782, #13857, @stevenchen-db)
Tracing improvements - MLflow Tracing now supports both automatic and manual tracing for the DSPy, LlamaIndex and LangChain flavors. Tracing is also auto-enabled for MLflow evaluation across all supported flavors. (#13790, #13793, #13795, #13897, @B-Step62)
New Tracing Integrations - MLflow Tracing now supports CrewAI and Anthropic, enabling a one-line, fully automated tracing experience. (#13903, @TomeHirata, #13851, @gabrielfu)
Any Type in model signature - MLflow now supports AnyType in model signature. It can be used to host any data types that were not supported before. (#13766, @serena-ruan)
Other Features:
update_current_trace API for adding tags to an active trace (#13828, @B-Step62); a short sketch follows after this list
trace.search_spans() method for searching spans within traces (#13984, @B-Step62)
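A minimal sketch of the update_current_trace API mentioned above, assuming it can be called from inside a function decorated with @mlflow.trace to attach tags to the trace currently being recorded; the function body and tag values are hypothetical.

```python
import mlflow


@mlflow.trace  # records a trace for each call to this function
def answer(question: str) -> str:
    # Hypothetical tag: attach metadata to the currently active trace.
    mlflow.update_current_trace(tags={"question_length": str(len(question))})
    return question.upper()


answer("what does update_current_trace do?")
```

The spans of the resulting trace can then be inspected with the trace.search_spans() method mentioned in the same list.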
Bug fixes:
mlflow.end_run inside an MLflow run context manager (#13888, @WeichenXu123)
Documentation updates:
Small bug fixes and documentation updates:
#13972, #13968, #13917, #13912, #13906, #13846, @serena-ruan; #13969, #13959, #13957, #13958, #13925, #13882, #13879, #13881, #13869, #13870, #13868, #13854, #13849, #13847, #13836, #13823, #13811, #13820, #13775, #13768, #13764, @harupy; #13960, #13914, #13862, #13892, #13916, #13918, #13915, #13878, #13891, #13863, #13859, #13850, #13844, #13835, #13818, #13762, @B-Step62; #13913, #13848, #13774, @TomeHirata; #13936, #13954, #13883, @daniellok-db; #13947, @AHB102; #13929, #13922, @Ajay-Satish-01; #13857, @stevenchen-db; #13773, @BenWilson2; #13705, @williamjamir; #13745, #13743, @WeichenXu123; #13895, @chenmoneygithub; #14023, @theBeginner86
v2.18.0
Compare Source
We are excited to announce the release of MLflow 2.18.0! This release includes a number of significant features, enhancements, and bug fixes.
Python Version Update
Python 3.8 is now at an end-of-life point. With official support being dropped for this legacy version, MLflow now requires Python 3.9
as a minimum supported version.
Major New Features
🦺 Fluent API Thread/Process Safety - MLflow's fluent APIs for tracking and the model registry have been overhauled to add support for both thread and multi-process safety. You are no longer forced to use the Client APIs for managing experiments, runs, and logging from within multiprocessing and threaded applications (a short sketch follows after this feature list). (#13456, #13419, @WeichenXu123)
🧩 DSPy flavor - MLflow now supports logging, loading, and tracing of DSPy models, broadening the support for advanced GenAI authoring within MLflow. Check out the MLflow DSPy Flavor documentation to get started! (#13131, #13279, #13369, #13345, @chenmoneygithub, #13543, #13800, #13807, @B-Step62, #13289, @michael-berk)
🖥️ Enhanced Trace UI - MLflow Tracing's UI has undergone a significant overhaul to bring usability and quality-of-life updates to the experience of auditing and investigating the contents of GenAI traces, from enhanced span content rendering using markdown to a standardized span component structure. (#13685, #13357, #13242, @daniellok-db)
🚄 New Tracing Integrations - MLflow Tracing now supports DSPy, LiteLLM, and Google Gemini, enabling a one-line, fully automated tracing experience. These integrations unlock enhanced observability across a broader range of industry tools. Stay tuned for upcoming integrations and updates! (#13801, @TomeHirata, #13585, @B-Step62)
📊 Expanded LLM-as-a-Judge Support - MLflow now enhances its evaluation capabilities with support for additional providers, including Anthropic, Bedrock, Mistral, and TogetherAI, alongside existing providers like OpenAI. Users can now also configure proxy endpoints or self-hosted LLMs that follow the provider API specs by using the new proxy_url and extra_headers options. Visit the LLM-as-a-Judge documentation for more details! (#13715, #13717, @B-Step62)
⏰ Environment Variable Detection - As a helpful reminder for when you are deploying models, MLflow now detects and reminds users of environment variables set during model logging, ensuring they are configured for deployment. In addition to this, the mlflow.models.predict utility has also been updated to include these variables in serving simulations, improving pre-deployment validation. (#13584, @serena-ruan)
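To illustrate the fluent API thread-safety improvement called out at the top of this feature list, here is a minimal sketch. It assumes the fluent APIs now keep an active run per thread, so each worker thread can start and log to its own run; the experiment name, worker count, and logged values are hypothetical.

```python
import threading

import mlflow

mlflow.set_experiment("threaded-demo")  # hypothetical experiment name


def worker(worker_id: int) -> None:
    # Each thread manages its own run through the fluent API.
    with mlflow.start_run(run_name=f"worker-{worker_id}"):
        mlflow.log_param("worker_id", worker_id)
        mlflow.log_metric("score", worker_id * 0.1)


threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```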
Breaking Changes to ChatModel Interface
ChatModel Interface Updates - As part of a broader unification effort within MLflow and services that rely on or deeply integrate with MLflow's GenAI features, we are working on a phased approach to making a consistent and standard interface for custom GenAI application development and usage. In the first phase (planned to span the next few releases of MLflow), we are marking several interfaces as deprecated, as they will be changing. These changes will be:
ChatRequest → ChatCompletionRequest, to provide disambiguation for future planned request interfaces.
ChatResponse → ChatCompletionResponse, for the same reason as the input interface.
metadata fields within ChatRequest and ChatResponse → custom_inputs and custom_outputs, respectively.
predict_stream will be updated to enable true streaming for custom GenAI applications. Currently, it returns a generator with synchronous outputs from predict. In a future release, it will return a generator of ChatCompletionChunks, enabling asynchronous streaming. While the API call structure will remain the same, the returned data payload will change significantly, aligning with LangChain's implementation.
mlflow.models.rag_signatures will be deprecated, merging into the unified ChatCompletionRequest, ChatCompletionResponse, and ChatCompletionChunks.
Other Features:
spark_udf support when running on Databricks Serverless runtime, Databricks Connect, and prebuilt Python environments (#13276, #13496, @WeichenXu123)
model_config parameter for pyfunc.spark_udf for customization of batch inference payload submission (#13517, @WeichenXu123); a short sketch follows after this list
Documents (#13242, @daniellok-db)
resources definitions for Langchain model logging (#13315, @sunishsheth2009)
dependencies for Agent definitions (#13246, @sunishsheth2009)
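As a sketch of the model_config option mentioned in the list above, assuming mlflow.pyfunc.spark_udf accepts a model_config argument as described in #13517; the model URI, result type, column name, and config keys are all hypothetical.

```python
from pyspark.sql import SparkSession

import mlflow

spark = SparkSession.builder.getOrCreate()

# Hypothetical model URI and config key; per the release note, model_config is
# intended to customize how batch inference payloads are submitted to the model.
predict_udf = mlflow.pyfunc.spark_udf(
    spark,
    model_uri="models:/my_model/1",
    result_type="string",
    model_config={"max_batch_size": 64},
)

df = spark.createDataFrame([("hello",), ("world",)], ["text"])
df.withColumn("prediction", predict_udf("text")).show()
```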
Bug fixes:
gc command when deleting experiments with logged datasets (#13741, @daniellok-db)
Langchain's pyfunc predict input conversion (#13652, @serena-ruan)
Optional dataclasses that define a model's signature (#13440, @bbqiu)
LangChain's autologging thread-safety behavior (#13672, @B-Step62)
role and index as required for chat schema (#13279, @chenmoneygithub)
Langchain models (#13610, @WeichenXu123)
Documentation updates:
model_config when logging models as code (#13631, @sunishsheth2009)
code_paths model logging feature (#13702, @TomeHirata)
SparkML log_model documentation with guidance on how to return probabilities from classification models (#13684, @WeichenXu123)
Small bug fixes and documentation updates:
#13775, #13768, #13764, #13744, #13699, #13742, #13703, #13669, #13682, #13569, #13563, #13562, #13539, #13537, #13533, #13408, #13295, @serena-ruan; #13768, #13764, #13761, #13738, #13737, #13735, #13734, #13723, #13726, #13662, #13692, #13689, #13688, #13680, #13674, #13666, #13661, #13625, #13460, #13626, #13546, #13621, #13623, #13603, #13617, #13614, #13606, #13600, #13583, #13601, #13602, #13604, #13598, #13596, #13597, #13531, #13594, #13589, #13581, #13112, #13587, #13582, #13579, #13578, #13545, #13572, #13571, #13564, #13559, #13565, #13558, #13541, #13560, #13556, #13534, #13386, #13532, #13385, #13384, #13383, #13507, #13523, #13518, #13492, #13493, #13487, #13490, #13488, #13449, #13471, #13417, #13445, #13430, #13448, #13443, #13429, #13418, #13412, #13382, #13402, #13381, #13364, #13356, #13309, #13313, #13334, #13331, #13273, #13322, #13319, #13308, #13302, #13268, #13298, #13296, @harupy; #13705, @williamjamir; #13632, @shichengzhou-db; #13755, #13712, #13260, @BenWilson2; #13745, #13743, #13697, #13548, #13549, #13577, #13349, #13351, #13350, #13342, #13341, @WeichenXu123; #13807, #13798, #13787, #13786, #13762, #13749, #13733, #13678, #13721, #13611, #13528, #13444, #13450, #13360, #13416, #13415, #13336, #13305, #13271, @B-Step62; #13808, #13708, @smurching; #13739, @fedorkobak; #13728, #13719, #13695, #13677, @TomeHirata; #13776, #13736, #13649, #13285, #13292, #13282, #13283, #13267, @daniellok-db; #13711, @bhavya2109sharma; #13693, #13658, @aravind-segu; #13553, @dsuhinin; #13663, @gitlijian; #13657, #13629, @parag-shendye; #13630, @JohannesJungbluth; #13613, @itepifanio; #13480, @agjendem; #13627, @ilyaresh; #13592, #13410, #13358, #13233, @nojaf; #13660, #13505, @sunishsheth2009; #13414, @lmoros-DB; #13399, @Abubakar17; #13390, @KekmaTime; #13291, @michael-berk; #12511, @jgiannuzzi; #13265, @Ahar28; #13785, @Rick-McCoy; #13676, @hyolim-e; #13718, @annzhang-db; #13705, @williamjamir
v2.17.2
Compare Source
MLflow 2.17.2 includes several major features and improvements
Fea
Configuration
📅 Schedule: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined).
🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.
♻ Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.
🔕 Ignore: Close this PR and you won't be reminded about this update again.
This PR was generated by Mend Renovate. View the repository job log.