Merge branch 'main' into feature/rabbitmq_worker
1martin1 authored Jan 22, 2025
2 parents 21f2c43 + c3eaa57 commit 12597f7
Showing 5 changed files with 18 additions and 23 deletions.
2 changes: 1 addition & 1 deletion README.rst
@@ -91,7 +91,7 @@
- `Support chat <https://t.me/protollm_helpdesk>`_

Articles about solutions based on ProtoLLM:
-========================================
+==========================================
- Zakharov K. et al. Forecasting Population Migration in Small Settlements Using Generative Models under Conditions of Data Scarcity // Smart Cities, 2024, Vol. 7, No. 5, pp. 2495-2513.
- Kovalchuk M. A. et al. SemConvTree: Semantic Convolutional Quadtrees for Multi-Scale Event Detection in Smart City // Smart Cities, 2024, Vol. 7, No. 5, pp. 2763-2780.
- Kalyuzhnaya A. et al. LLM Agents for Smart City Management: Enhancing Decision Support through Multi-Agent AI Systems, 2024, Under Review.
22 changes: 11 additions & 11 deletions README_en.rst
@@ -25,24 +25,24 @@ Intro
ProtoLLM features
==================
- Rapid prototyping of LLM-based information retrieval systems using RAG:
  Implementations of architectural patterns for interacting with different databases and web-service interfaces;
  Methods for optimising RAG pipelines to eliminate redundancy.

- Development and integration of LLM applications, with external services and models connected through a plugin system:
  Integration with AutoML solutions for predictive tasks;
  Providing structured output generation and validation.

- Implementation of ensemble methods and multi-agent approaches to improve the efficiency of LLMs:
  Possibility of combining arbitrary LLMs into ensembles to improve generation quality, with automatic selection of the ensemble composition;
  Work with agent models and ensemble pipelines.

- Generation of complex synthetic data for further training and improvement of LLMs:
  Generating examples from existing models and datasets;
  Evolutionary optimisation to increase the diversity of examples;
  Integration with Label Studio.

- Providing interoperability with various LLM providers:
  Support for native models (GigaChat, YandexGPT, vsegpt, etc.);
  Interaction with open-source models deployed locally.


Installation
@@ -106,7 +106,7 @@ Contacts
- `Helpdesk chat <https://t.me/protollm_helpdesk>`_

Papers about ProtoLLM-based solutions:
-=====================================
+======================================
- Zakharov K. et al. Forecasting Population Migration in Small Settlements Using Generative Models under Conditions of Data Scarcity // Smart Cities, 2024, Vol. 7, No. 5, pp. 2495-2513.
- Kovalchuk M. A. et al. SemConvTree: Semantic Convolutional Quadtrees for Multi-Scale Event Detection in Smart City // Smart Cities, 2024, Vol. 7, No. 5, pp. 2763-2780.
- Kalyuzhnaya A. et al. LLM Agents for Smart City Management: Enhancing Decision Support through Multi-Agent AI Systems, 2024, Under Review.
@@ -49,19 +49,16 @@ def publish_message(self, queue_name: str, message: dict, priority: int = None):
        """
        try:
            with self.get_channel() as channel:
-               # Declare the queue with priority if specified
                arguments = {}
                if priority is not None:
-                   arguments['x-max-priority'] = 10  # Set the maximum priority level
+                   arguments['x-max-priority'] = 10

                channel.queue_declare(queue=queue_name, durable=True, arguments=arguments)

-               # Publish the message with the specified priority
                properties = pika.BasicProperties(
-                   delivery_mode=2,  # Make message persistent
-                   priority=priority if priority is not None else 0  # Default to 0 if no priority
+                   delivery_mode=2,
+                   priority=priority if priority is not None else 0
                )

                channel.basic_publish(
                    exchange='',
                    routing_key=queue_name,
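For context, this hunk relies on the standard RabbitMQ priority-queue pattern: the queue must be declared with an `x-max-priority` argument before per-message `priority` values have any effect. Below is a minimal, self-contained sketch with plain pika; the host, queue name, and payload are illustrative assumptions, not values from this repository.

import json

import pika

# Illustrative connection settings; the wrapper above manages its own
# connection and channel lifecycle.
connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()

# The queue must be declared with 'x-max-priority' for priorities to work;
# redeclaring an existing queue with different arguments raises a channel error.
channel.queue_declare(queue="tasks", durable=True, arguments={"x-max-priority": 10})

channel.basic_publish(
    exchange="",
    routing_key="tasks",
    body=json.dumps({"job": "demo"}),
    properties=pika.BasicProperties(
        delivery_mode=2,  # persistent message, survives broker restarts
        priority=5,       # 0..10 here; higher values are delivered first
    ),
)
connection.close()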
@@ -46,20 +46,18 @@ def test_publish_message_with_priority(rabbit_wrapper, mock_pika):

    rabbit_wrapper.publish_message(queue_name, message, priority=priority)

-   # Verify that the queue was declared with the 'x-max-priority' argument
    mock_pika.queue_declare.assert_called_once_with(
        queue=queue_name,
        durable=True,
-       arguments={"x-max-priority": 10}  # Make sure the maximum priority matches the code
+       arguments={"x-max-priority": 10}
    )

-   # Verify that the message was published with the given priority
    mock_pika.basic_publish.assert_called_once_with(
        exchange="",
        routing_key=queue_name,
        body=json.dumps(message),
        properties=pika.BasicProperties(
-           delivery_mode=2,  # Make the message persistent
+           delivery_mode=2,
            priority=priority
        ),
    )
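The `rabbit_wrapper` and `mock_pika` fixtures are defined outside this hunk. A plausible pytest sketch of how they might be wired is shown below; the import path and constructor are assumptions inferred from the wrapper code above, not the repository's actual layout.

from unittest.mock import MagicMock

import pytest


@pytest.fixture
def mock_pika():
    # Stand-in for a live pika channel; the test asserts on its recorded calls.
    return MagicMock()


@pytest.fixture
def rabbit_wrapper(mock_pika):
    # Hypothetical import path and constructor; adjust to the real module.
    from protollm.rabbitmq_wrapper import RabbitMQWrapper

    wrapper = RabbitMQWrapper()
    # publish_message uses `with self.get_channel() as channel:`, so the
    # patched get_channel must support the context-manager protocol.
    wrapper.get_channel = MagicMock()
    wrapper.get_channel.return_value.__enter__.return_value = mock_pika
    return wrapper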
2 changes: 1 addition & 1 deletion pyproject.toml
@@ -1,9 +1,9 @@
[tool.poetry]
readme = "README_en.rst"
name = "ProtoLLM"
version = "0.1.0"
description = ""
authors = ["aimclub"]
readme = "README.rst"


[tool.poetry.dependencies]
