Actions: VJHack/llama.cpp

Publish Docker image

81 workflow runs

[SYCL]set context default value to avoid memory issue, update guide (…
Publish Docker image #6: Commit faf67b3 pushed by VJHack
master · September 18, 2024 01:25 · 32m 17s

ggml : move common CPU backend impl to new header (#9509)
Publish Docker image #5: Commit 23e0d70 pushed by VJHack
master · September 17, 2024 01:49 · 44m 46s

py : add "LLaMAForCausalLM" conversion support (#9485)
Publish Docker image #4: Commit 3c7989f pushed by VJHack
master · September 15, 2024 14:27 · 9m 9s

made loading message more descriptive
Publish Docker image #3: Commit 739ea75 pushed by VJHack
master · September 13, 2024 04:14 · 57m 30s

Merge branch 'ggerganov:master' into master
Publish Docker image #2: Commit 69c97bb pushed by VJHack
master · September 13, 2024 03:14 · 2m 56s

account for both api and web browser requests
Publish Docker image #1: Commit cb13382 pushed by VJHack
master · September 13, 2024 02:44 · 20m 4s