Conversation
Hi @iterative/websites! I would love your assistance in preparing this suggestion / use case. Can you please help me out with this - provide some simple tab structure/component in this branch that I can then use?
Link Check Report: There were no links to check!
@omesser, if you would like to make something like this (from https://cml.dev/doc/cml-with-dvc), then you need to use something like the markup sketched below. See this file as an example: https://raw.githubusercontent.com/iterative/cml.dev/master/content/docs/cml-with-dvc.md. Curious to see how it will look :)
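A minimal sketch of that markup, assuming the `<toggle>`/`<tab>` tags used across the iterative.ai docs (the tab titles here are placeholders):

```md
<toggle>
<tab title="First tab">

Content for the first tab.

</tab>
<tab title="Second tab">

Content for the second tab.

</tab>
</toggle>
```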
In addition to @aguschin's suggestion of using tabs, I think we can also use `<details>`:

```md
<details>

### Title

...
content
...

</details>
```
Thanks @aguschin - that toggle/tab system is exactly what I was looking for 🙏
Force-pushed fa35c9a to 02c7687, then 02c7687 to 6788c6e.
@aguschin @mike0sv @jorgeorpinel - please take a look and compare this with the alternatives. Would like to get your thoughts.
For me this feels like a lot of text for one page 😄 I mean I'd probably cut a lot of the extra explanations and leave just something like "Here's how to do X: …". UPD: see #244
Force-pushed 6788c6e to 1132e19.
Kinda agree with @mike0sv - the page is long, and I was thinking to hide serve/build/deploy behind a single toggle (probably with deploy as the page we show by default). But this reminds me of a discussion we had in #190 (comment); TL;DR, there I suggested something like:

And this ^ is a configuration page to my mind, not a GS. So if we unite serve/build/deploy behind a single toggle, I'd like to not get this configuration page, but get a Get Started. As a suggestion: we can keep in mind that we'll have this configuration page at some point in the future. How do we write the GS so that it complements, but doesn't replace, that page?
@yathomasi, is there a way to achieve something similar? Like nested toggles? Tried this and it doesn't work:
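Presumably the attempt nested one `<toggle>` inside a `<tab>` of another, along these lines (tab titles are placeholders):

```md
<toggle>
<tab title="Outer">

<toggle>
<tab title="Inner">

Nested content.

</tab>
</toggle>

</tab>
</toggle>
```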
It is available, @aguschin. We have already started using nested tabs, e.g. https://dvc.org/doc/install/completion#what-shell-do-you-have. Also, the syntax is correct. It looks like there is an issue with mlem.ai; I will take a look.
I just checked with the exact code you have provided on the current branch and it seems to be working. |
After a discussion with @mike0sv and @omesser we decided:
- to select this variant of the GS over the others (the current one with apply/build/serve/deploy, vs. "Get Started updates" #228 and "Simplify get started very condenced" #244)
- @omesser will make another PR on top of this, taking some changes from "Simplify get started very condenced" #244
Minor? I would not use the term "tutorial". We don't really have any proper tutorials here (maybe in the company blog, though). A tutorial is a walkthrough solving a specific/real-life problem.
Sorry, late review, but I hope it's still relevant since there seems to be a plan to follow up on this per the decision above.
I turned #244 into a draft for now, BTW.
Yep, not sure why this is called condensed when it adds back several things we had previously consciously removed (e.g. …).
Keep in mind this requires a lot of clicking, so it's not ideal for this doc. Not even sure about the plain tabs TBH (the GS should ideally be linear), but I guess it can be fine as long as the "happy path" is always visible in the first tabs.
@jorgeorpinel, in this case tabs should work better, I think (similar to the GitHub/GitLab tabs in CML). MLEM is not DVC; the Get Started doesn't have to be the same as, or as complicated as, DVC's. It is condensed since it's a single page vs. multiple - fewer clicks and less text overall, I hope. Other things - yes, we should be clarifying and reviewing titles if that makes sense. Please write your specific concerns somewhere.
I have … and will move some info to the UG index page, rel. #238 (comment).
> <tab title="Command Line">
>
> ### Batch scoring
Not seeing a batch operation here…? Is there a better term than "batch" here?
This is indeed batch scoring (using the model to infer on a concrete dataset). Maybe the confusion is because the "batchness" is not demonstrated clearly: the example uses a "batch" of a single vector, but the csv (or whatever format) file here can be of arbitrary size (multiple rows in the csv), and the apply command will run through the entire dataset provided and infer the model on it.

@aguschin - maybe the example should be slightly modified to emphasize the above - just `echo`ing another line should do it :D
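A sketch of that tweak, assuming the iris feature columns and the `new_data.csv` file name from the apply example below - a second data row makes the "batch" visible:

```cli
$ echo "sepal length (cm),sepal width (cm),petal length (cm),petal width (cm)
5.1,3.5,1.4,0.2
6.7,3.0,5.2,2.3" > new_data.csv
```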
@jorgeorpinel, will that fix your confusion? Or maybe you referred to something like a `--batch_size X` param that would process data in chunks of length X? Like:

```cli
$ mlem apply models/rf new_data.csv \
    --method predict_proba \
    --import \
    --import-type "pandas[csv]" \
    --batch_size 1
```
> the example uses a "batch" of a single vector, but the csv... can be of arbitrary size

> `--batch_size 1`

Can we make the example more realistic, so that the batch is actually a batch (a group of things)? 🙂

But I'm getting the feeling that the batchness aspect is not important here, which again brings me to suggest we rename this to something like "Offline model scoring" (Idk, need your input to find a good title). Otherwise the term "batch" may be a distraction (in the title).
It's still very much a toy example (that's fine for the get-started, imo) - but took a stab at it here: simply making the dataset two rows instead of a single vector should get the idea across, imo.
OK, from this I get that it is important to stress the batchness. Having 2 lines instead of 1 sounds simple and good. Thanks
> ## Local model serving
>
> First things first, let's run a model server locally on our machine. To launch a
> local FastAPI model server, simply run:
>
> ```cli
> $ mlem serve fastapi --model models/rf
> ```
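For reference, once the server is up it can be queried over HTTP. In the sketch below, the port (8080), the `/predict` endpoint, and the payload shape are assumptions - the server's interactive docs at `/docs` show the exact request format:

```cli
$ curl -X POST http://127.0.0.1:8080/predict \
    -H "Content-Type: application/json" \
    -d '{"data": {"values": [{"sepal length (cm)": 5.1, "sepal width (cm)": 3.5, "petal length (cm)": 1.4, "petal width (cm)": 0.2}]}}'
```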
I thought we had decided to leave this out of the GS? Focus on deployment (which is more relevant and includes serving).
I'm not aware of this - it depends on how aggressively we want to cut length here. @aguschin - thoughts?
I bet almost every user would run `mlem serve` multiple times for their model until they decide to do `mlem deploy`. So it's good that we introduce it here.

Also, off topic - maybe we need `mlem serve streamlit` (see UG) in GS? It will have a nicer look, I assume. (On the other hand, Streamlit is not that polished now, and we use it in every blog post anyway - so keeping FastAPI for now may be a better option.)
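For comparison, the Streamlit option would presumably be the same command with the server type swapped (a sketch; the `--model` flag is assumed to match the FastAPI example):

```cli
$ mlem serve streamlit --model models/rf
```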
Sounds good, we should keep it then.

> maybe we need `mlem serve streamlit`

Up to you.
I think FastAPI works better for us - better adoption, and the Swagger API spec is a great place to showcase a "by the book" REST API server.
> …example FastAPI or RabbitMQ. Here we'll check out how it works with FastAPI,
> since serving models via a REST API is a very common use case.
>
> ## Local model serving
Confusing structure... (see this section and its sub-sections)
Not sure I get what you mean here exactly - I totally see room for confusion between serving and deploying; I think it might be a real confusion in MLEM's UX that we need to think about how to improve. Is that what you mean, or something else? Can you elaborate a bit?
I've started addressing this in a follow-up PR. Here's some more context: #265 (review)

> room for confusion between serving and deploying

Exactly. I'm not sure yet if it's something docs-specific or more of a product question (UX) indeed. But surely we can at least ameliorate the issue with a good docs structure.
I think it's a real product UX question - cc @aguschin @mike0sv, let's discuss this a bit and brainstorm whether we can simplify or communicate this differently - I get the confusion @jorgeorpinel has here 💯.

For now I've built this and the next proposed iteration of the get-started taking the current abstraction/structure as a given (deployment builds on top of serving), since the actual code is structured around this concept.
Nice, thanks. We may have conflicts among PRs though... I'll focus on finishing mine first.
Please see #265 for some more concerns and follow-ups.
https://mlem-ai-simplify-get-st-rnmia5.herokuapp.com/doc/get-started

Here, I'll create an alternative where there is a single get-started scenario, but I'll use some alternative paths within that scenario (so the user can choose).

Details: …