Incremental Y-Bus admittance matrix update #444

Merged
74 commits
f551e8b
Incremental updates with decrement functionality, implemented without…
Jerry-Jinfeng-Guo Dec 5, 2023
5a96194
fixed minor sign error with decrement
Jerry-Jinfeng-Guo Dec 5, 2023
74f5d82
Fixed type cast complaints on linux/macos compilers; added logic to p…
Jerry-Jinfeng-Guo Dec 6, 2023
ada5e76
Added function interface in the `update.hpp`; next to add `std::view`…
Jerry-Jinfeng-Guo Dec 6, 2023
17548e5
Removed code 3 smells
Jerry-Jinfeng-Guo Dec 6, 2023
c5bf91d
Minor fix: clang tidy doesn't like copy constructor
Jerry-Jinfeng-Guo Dec 8, 2023
04c7796
Updated several duplicated code; better naming of variable (to avoid …
Jerry-Jinfeng-Guo Dec 18, 2023
aa7c7a5
Merge branch 'main' into feature/DGC-1950-Incrementally-update-topolo…
Jerry-Jinfeng-Guo Dec 18, 2023
e1abfef
fix clang-tidy warning
Jerry-Jinfeng-Guo Dec 18, 2023
28c21d3
Re-worked function structures
Jerry-Jinfeng-Guo Dec 19, 2023
a67f779
range copy fix
Jerry-Jinfeng-Guo Dec 19, 2023
d8eff9b
clang-tidy
Jerry-Jinfeng-Guo Dec 19, 2023
e0c57a7
Test case work in progress
Jerry-Jinfeng-Guo Dec 19, 2023
62e8678
Test case on the `y_bus` itself
Jerry-Jinfeng-Guo Dec 20, 2023
7ecb109
Fixed an index error: admittance matrix entry <-> parameter map
Jerry-Jinfeng-Guo Dec 20, 2023
a817271
clang tidy
Jerry-Jinfeng-Guo Dec 21, 2023
82d02c7
Reverted to `boost` for `iota`; fixed couple other things like functi…
Jerry-Jinfeng-Guo Dec 21, 2023
3581ef8
Merge branch 'main' into feature/DGC-1950-Incrementally-update-topolo…
Jerry-Jinfeng-Guo Dec 21, 2023
83e7ce8
Merge branch 'main' into feature/DGC-1950-Incrementally-update-topolo…
mgovers Dec 22, 2023
f968f8a
Fully boost; simplified a couple duplication in lambda; lowered UPPER…
Jerry-Jinfeng-Guo Dec 22, 2023
b2a89d3
[WIP] Draft integration: changed from delta based increment to whole …
Jerry-Jinfeng-Guo Jan 2, 2024
6e8392f
Integration. Increment\decrement\delta-based update.
Jerry-Jinfeng-Guo Jan 3, 2024
a51ca64
Merge branch 'main' into feature/DGC-1950-Incrementally-update-topolo…
Jerry-Jinfeng-Guo Jan 4, 2024
a010452
Merge branch 'main' into feature/DGC-1950-Incrementally-update-topolo…
Jerry-Jinfeng-Guo Jan 4, 2024
3067db8
Refactored the code; no more delta based in-/de-crements; swap based …
Jerry-Jinfeng-Guo Jan 5, 2024
fa69783
Brief code cleaning-up
Jerry-Jinfeng-Guo Jan 8, 2024
64dc9ac
Retouched the test case
Jerry-Jinfeng-Guo Jan 8, 2024
7ad1a67
Minor code smell removal
Jerry-Jinfeng-Guo Jan 8, 2024
78ae4b5
Merge branch 'main' into feature/DGC-1950-Incrementally-update-topolo…
Jerry-Jinfeng-Guo Jan 10, 2024
2bcc366
Manually merge conflict fix; namespace math_model_impl -> math_solver
Jerry-Jinfeng-Guo Jan 10, 2024
c9fd978
Merge branch 'main' into feature/DGC-1950-Incrementally-update-topolo…
mgovers Jan 11, 2024
a4eece8
Merge branch 'main' into feature/DGC-1950-Incrementally-update-topolo…
Jerry-Jinfeng-Guo Jan 16, 2024
2a2ff1a
Merge branch 'main' into feature/DGC-1950-Incrementally-update-topolo…
Jerry-Jinfeng-Guo Jan 16, 2024
59a8a7d
Merge branch 'main' into feature/DGC-1950-Incrementally-update-topolo…
mgovers Jan 17, 2024
409ea24
Tentative implementation integrating progressive `update_y_bus` with …
Jerry-Jinfeng-Guo Jan 19, 2024
5448fb3
Changes not included due to merger from master
Jerry-Jinfeng-Guo Jan 22, 2024
7231ecc
Suppress some compiler warnings.
Jerry-Jinfeng-Guo Jan 22, 2024
da1330a
minor fix
Jerry-Jinfeng-Guo Jan 22, 2024
d7b3acb
Merge branch 'main' into feature/DGC-1950-Incrementally-update-topolo…
mgovers Jan 23, 2024
04f8ed3
Fix and cleaned up the implementation.
Jerry-Jinfeng-Guo Jan 23, 2024
9e69d74
Code cleaning up. `set` to `vector` correction.
Jerry-Jinfeng-Guo Jan 24, 2024
382c4dc
Merge branch 'main' into feature/DGC-1950-Incrementally-update-topolo…
Jerry-Jinfeng-Guo Jan 24, 2024
bb0d92d
Changes processing review comments
Jerry-Jinfeng-Guo Jan 24, 2024
1f3199e
Minor improvement. string lookup avoided.
Jerry-Jinfeng-Guo Jan 25, 2024
ab3b523
Removed decrement logic from `y_bus`
Jerry-Jinfeng-Guo Jan 26, 2024
5d92cd6
Processed review comments.
Jerry-Jinfeng-Guo Jan 26, 2024
25f0392
Merge branch 'main' into feature/DGC-1950-Incrementally-update-topolo…
Jerry-Jinfeng-Guo Jan 26, 2024
5c5dd69
Review comments processing.
Jerry-Jinfeng-Guo Jan 26, 2024
8153eb6
Added `sequence_index_map` accumulation logic for repeated `update` call
Jerry-Jinfeng-Guo Jan 26, 2024
ab20979
Minor update main_model.hpp
Jerry-Jinfeng-Guo Jan 26, 2024
0a94100
Update main_model.hpp
Jerry-Jinfeng-Guo Jan 26, 2024
274a5a1
Update the `sequence_idx_map` accumulation logic to simply just accum…
Jerry-Jinfeng-Guo Jan 29, 2024
63b1e8a
Merge branch 'main' into feature/DGC-1950-Incrementally-update-topolo…
Jerry-Jinfeng-Guo Jan 29, 2024
68f41a7
Merge branch 'main' into feature/DGC-1950-Incrementally-update-topolo…
mgovers Jan 30, 2024
19346c1
Merge branch 'main' into feature/DGC-1950-Incrementally-update-topolo…
mgovers Jan 30, 2024
b449278
fix asym/sym switching edge case
mgovers Feb 1, 2024
1f3f936
Merge branch 'feature/DGC-1950-Incrementally-update-topology-paramete…
mgovers Feb 1, 2024
f6726d6
remove unused functions
mgovers Feb 1, 2024
616b6df
sonar cloud
mgovers Feb 1, 2024
97750dd
Sonar cloud, attempt 2
Jerry-Jinfeng-Guo Feb 1, 2024
8a40ef1
Sonar cloud, attempt 3
Jerry-Jinfeng-Guo Feb 1, 2024
19c95ee
Merge branch 'main' into feature/DGC-1950-Incrementally-update-topolo…
Jerry-Jinfeng-Guo Feb 2, 2024
9179778
Added test case for the new sym/asym alternating compute mode logic i…
Jerry-Jinfeng-Guo Feb 2, 2024
e09021d
Merge branch 'main' into feature/DGC-1950-Incrementally-update-topolo…
Jerry-Jinfeng-Guo Feb 2, 2024
5885897
Test case reworked; cleaned unneeded headers
Jerry-Jinfeng-Guo Feb 2, 2024
dc270b3
improve test case
mgovers Feb 5, 2024
57d3487
fix sonar cloud code smells
mgovers Feb 5, 2024
34f4050
fix more code smells
mgovers Feb 5, 2024
c67781d
Merge branch 'main' into feature/DGC-1950-Incrementally-update-topolo…
Jerry-Jinfeng-Guo Feb 5, 2024
3bb8ac0
params changed callback from ybus to math solver
mgovers Feb 5, 2024
f1161cc
clang-tidy pleasing
Jerry-Jinfeng-Guo Feb 5, 2024
bf404f6
resolve code smells
mgovers Feb 5, 2024
5f211c6
copy of admittance
mgovers Feb 5, 2024
8dbf135
make ybus owner of admittance data
mgovers Feb 5, 2024
@@ -159,6 +159,14 @@ template <bool sym> struct MathModelParam {
ComplexTensorVector<sym> source_param;
};

template <bool sym> struct MathModelParamIncrement {
std::vector<BranchCalcParam<sym>> branch_param;
ComplexTensorVector<sym> shunt_param;
ComplexTensorVector<sym> source_param;
std::vector<Idx> branch_param_to_change;
std::vector<Idx> shunt_param_to_change;
};

template <bool sym> struct PowerFlowInput {
ComplexVector source; // Complex u_ref of each source
ComplexValueVector<sym> s_injection; // Specified injection power of each load_gen
@@ -10,6 +10,8 @@

#include "../all_components.hpp"

#include <ranges>

namespace power_grid_model::main_core {

namespace detail {
@@ -82,6 +84,33 @@ inline void update_y_bus(YBus<sym>& y_bus, std::shared_ptr<MathModelParam<sym> c
y_bus.update_admittance(math_model_param);
}

template <bool sym>
inline void update_y_bus_increment(YBus<sym>& y_bus, std::shared_ptr<MathModelParam<sym> const> const& math_model_param,
bool increment) {
auto branch_param_to_change_views =
std::views::iota(Idx{0}, static_cast<Idx>(math_model_param->branch_param.size())) | std::views::filter([&math_model_param](Idx i) {
return math_model_param->branch_param[i].yff() != ComplexTensor<sym>{0.0} ||
math_model_param->branch_param[i].yft() != ComplexTensor<sym>{0.0} ||
math_model_param->branch_param[i].ytf() != ComplexTensor<sym>{0.0} ||
math_model_param->branch_param[i].ytt() != ComplexTensor<sym>{0.0};
});
auto shunt_param_to_change_views =
std::views::iota(Idx{0}, static_cast<Idx>(math_model_param->shunt_param.size())) | std::views::filter([&math_model_param](Idx i) {
return math_model_param->shunt_param[i] != ComplexTensor<sym>{0.0};
});

MathModelParamIncrement<sym> math_model_param_incrmt;
math_model_param_incrmt.branch_param = math_model_param->branch_param;
math_model_param_incrmt.shunt_param = math_model_param->shunt_param;
math_model_param_incrmt.source_param = math_model_param->source_param; // not sure if we actually need this
math_model_param_incrmt.branch_param_to_change = {branch_param_to_change_views.begin(),
branch_param_to_change_views.end()};
math_model_param_incrmt.shunt_param_to_change = {shunt_param_to_change_views.begin(),
shunt_param_to_change_views.end()};

auto math_model_param_incrmt_ptr = std::make_shared<MathModelParamIncrement<sym> const>(math_model_param_incrmt);

y_bus.update_admittance_increment(math_model_param_incrmt_ptr, !increment);
}

template <bool sym>
inline void update_y_bus(MathState& math_state, std::vector<MathModelParam<sym>> const& math_model_params,
Idx n_math_solvers) {
@@ -11,6 +11,8 @@
#include "../sparse_mapping.hpp"
#include "../three_phase_tensor.hpp"

#include <ranges>

namespace power_grid_model {

// hide implementation in inside namespace
@@ -72,6 +74,7 @@ struct YBusStructure {
// element of ybus of all components
std::vector<YBusElement> y_bus_element;
std::vector<Idx> y_bus_entry_indptr;
std::vector<MatrixPos> y_bus_pos_in_entries;
// sequence entry of bus data
IdxVector bus_entry;
// LU csr structure for the y bus with sparse fill-ins
@@ -206,6 +209,7 @@ struct YBusStructure {
// all entries in the same position are looped, append indptr
// need to be offset by fill-in
y_bus_entry_indptr.push_back((it_element - vec_map_element.cbegin()) - fill_in_counter);
y_bus_pos_in_entries.push_back(pos);
// iterate linear nnz
++nnz_counter;
++nnz_counter_lu;
@@ -350,6 +354,79 @@ template <bool sym> class YBus {
admittance_ = std::make_shared<ComplexTensorVector<sym> const>(std::move(admittance));
}

// void increments_to_entries(std::shared_ptr<IdxVector> affected_entries) {
IdxVector increments_to_entries() {
// construct affected entries
IdxVector affected_entries = {};
auto const& y_bus_pos_in_entries = y_bus_struct_->y_bus_pos_in_entries;
auto process_params = [&affected_entries, &y_bus_pos_in_entries, this](auto params_to_change, bool is_branch) {
for (auto param_to_change : params_to_change) {
Idx id_x{0};
Idx id_y{0};
if (is_branch) {
id_x = math_topology_->branch_bus_idx[param_to_change][0];
id_y = math_topology_->branch_bus_idx[param_to_change][1];
} else {
id_x = math_topology_->shunts_per_bus.get_group(param_to_change);
id_y = math_topology_->shunts_per_bus.get_group(param_to_change);
}
auto it =
std::ranges::find(y_bus_pos_in_entries.begin(), y_bus_pos_in_entries.end(), MatrixPos{id_x, id_y});
if (it != y_bus_pos_in_entries.end()) {
affected_entries.push_back(std::distance(y_bus_pos_in_entries.begin(), it));
}
}
};

process_params(math_model_param_incrmt_->branch_param_to_change, true);
process_params(math_model_param_incrmt_->shunt_param_to_change, false);
return affected_entries;
}

void update_admittance_increment(std::shared_ptr<MathModelParamIncrement<sym> const> const& math_model_param_incrmt,
bool is_decrement = false) {
const double mode = is_decrement ? -1.0 : 1.0;
if (!is_decrement) {
// overwrite the old cached parameters in increment mode (update)
math_model_param_incrmt_ = math_model_param_incrmt;
} else if (decremented_) {
math_model_param_incrmt_ = {}; // reset the recorded increment that has already been decremented
return;
}

// construct admittance data
ComplexTensorVector<sym> admittance(nnz());
assert(Idx(admittance_->size()) == nnz());
std::ranges::copy(*admittance_, admittance.begin());
auto const& y_bus_element = y_bus_struct_->y_bus_element;
auto const& y_bus_entry_indptr = y_bus_struct_->y_bus_entry_indptr;

// construct affected entries
auto const affected_entries = increments_to_entries();

// process and update affected entries
for (auto const entry : affected_entries) {
// start admittance accumulation with zero
ComplexTensor<sym> entry_admittance{0.0};
// loop over all entries of this position
for (Idx element = y_bus_entry_indptr[entry]; element != y_bus_entry_indptr[entry + 1]; ++element) {
if (y_bus_element[element].element_type == YBusElementType::shunt) {
// shunt
entry_admittance += math_model_param_incrmt_->shunt_param[y_bus_element[element].idx];
} else {
// branch
entry_admittance += math_model_param_incrmt_->branch_param[y_bus_element[element].idx]
.value[static_cast<Idx>(y_bus_element[element].element_type)];
}
}
// assign
admittance[entry] += mode * entry_admittance;
}
// move to shared ownership
admittance_ = std::make_shared<ComplexTensorVector<sym> const>(std::move(admittance));
decremented_ = is_decrement;
}

ComplexValue<sym> calculate_injection(ComplexValueVector<sym> const& u, Idx bus_number) const {
Idx const begin = row_indptr()[bus_number];
Idx const end = row_indptr()[bus_number + 1];
@@ -428,6 +505,10 @@ template <bool sym> class YBus {

// cache the math parameters
std::shared_ptr<MathModelParam<sym> const> math_model_param_;

// cache the increment math parameters
std::shared_ptr<MathModelParamIncrement<sym> const> math_model_param_incrmt_;
bool decremented_ = false;
};

template class YBus<true>;
179 changes: 178 additions & 1 deletion tests/cpp_unit_tests/test_y_bus.cpp
@@ -7,6 +7,8 @@

#include <doctest/doctest.h>

#include <ranges>

namespace power_grid_model {

TEST_CASE("Test y bus") {
@@ -32,7 +34,7 @@ TEST_CASE("Test y bus") {
^ | |
| | 5
--- 0 --- |
X
*/

MathModelTopology topo{};
@@ -272,6 +274,181 @@ TEST_CASE("Test fill-in y bus") {
CHECK(ybus.map_lu_y_bus == map_lu_y_bus);
}

TEST_CASE("Incremental update y-bus") {
/*
test Y bus struct
[
x, x, 0, 0
x, x, x, 0
0, x, x, x
0, 0, x, x
]

[0] = Node
--0--> = Branch (from --id--> to)
-X- = Open switch / not connected

Topology:

--- 4 --- ----- 3 -----
| | | |
| v v |
[0] [1] --- 1 --> [2] --- 2 --> [3]
^ | |
| | 5
--- 0 --- |
X
*/

MathModelTopology topo{};
MathModelParam<true> param_sym;
topo.phase_shift.resize(4, 0.0);
topo.branch_bus_idx = {
{1, 0}, // branch 0 from node 1 to 0
{1, 2}, // branch 1 from node 1 to 2
{2, 3}, // branch 2 from node 2 to 3
{3, 2}, // branch 3 from node 3 to 2
{0, 1}, // branch 4 from node 0 to 1
{2, -1} // branch 5 from node 2 to "not connected"
};
param_sym.branch_param = { // ff, ft, tf, tt
{1.0i, 2.0i, 3.0i, 4.0i}, // { 1, 0 }
{5.0, 6.0, 7.0, 8.0}, // { 1, 2 }
{9.0i, 10.0i, 11.0i, 12.0i}, // { 2, 3 }
{13.0, 14.0, 15.0, 16.0}, // { 3, 2 }
{17.0, 18.0, 19.0, 20.0}, // { 0, 1 }
{1000i, 0.0, 0.0, 0.0}}; // { 2, -1 }
topo.shunts_per_bus = {from_sparse, {0, 1, 1, 1, 2}}; // 4 buses, 2 shunts -> shunt connected to bus 0 and bus 3
param_sym.shunt_param = {100.0i, 200.0i};

// get shared ptr
auto topo_ptr = std::make_shared<MathModelTopology const>(topo);

const ComplexTensorVector<true> admittance_sym = {
17.0 + 104.0i, // 0, 0 -> {1, 0}tt + {0, 1}ff + shunt(0) = 4.0i + 17.0 + 100.0i
18.0 + 3.0i, // 0, 1 -> {0, 1}ft + {1, 0}tf = 18.0 + 3.0i
19.0 + 2.0i, // 1, 0 -> {0, 1}tf + {1, 0}ft = 19.0 + 2.0i
25.0 + 1.0i, // 1, 1 -> {0, 1}tt + {1, 0}ff + {1,2}ff = 20.0 + 1.0i + 5.0
6.0, // 1, 2 -> {1,2}ft = 6.0
7.0, // 2, 1 -> {1,2}tf = 7.0
24.0 + 1009.0i, // 2, 2 -> {1,2}tt + {2,3}ff + {3, 2}tt + {2,-1}ff = 8.0 + 9.0i + 16.0 + 1000.0i = 24.0 + 1009i
15.0 + 10.0i, // 2, 3 -> {2,3}ft + {3,2}tf = 10.0i + 15.0
14.0 + 11.0i, // 3, 2 -> {2,3}tf + {3,2}ft = 11.0i + 14.0
13.0 + 212.0i // 3, 3 -> {2,3}tt + {3,2}ff + shunt(1) = 12.0i + 13.0 + 200.0i
};

const ComplexTensorVector<true> admittance_sym_state_1 = {34.0 + 208.0i, 36.0 + 6.0i, 38.0 + 4.0i, 50.0 + 2.0i,
12.0, 14.0, 48.0 + 2018.0i, 30.0 + 20.0i,
14.0 + 22.0i, 26.0 + 424.0i};
topo.branch_bus_idx = {
{1, 0}, // branch 0 from node 1 to 0
{1, 2}, // branch 1 from node 1 to 2
{2, 3}, // branch 2 from node 2 to 3
{3, 2}, // branch 3 from node 3 to 2
{0, 1}, // branch 4 from node 0 to 1
{2, -1} // branch 5 from node 2 to "not connected"
};
MathModelParam<true> param_sym_update;
param_sym_update.branch_param = { // ff, ft, tf, tt
{1.0i, 0.0, 0.0, 0.0}, // {1, 0}
{0.0, 1.0, 0.0, 0.0}, // {1, 2}
{0.0, 0.0, 0.0, 2.0i}, // {2, 3}
{0.0, 0.0, 2.0, 0.0}, // {3, 2}
{0.0, 0.0, 0.0, 0.0}, // {0, 1}
{1.0i, 0.0, 0.0, 0.0}}; // {2, -1}
param_sym_update.shunt_param = {1.0i, 0.0i};

const ComplexTensorVector<true> admittance_sym_2 = {
// 17.0 + 104.0i, [v]
17.0 + 105.0i, // 0, 0 -> += {1, 0}tt + {0, 1}ff + shunt(0) = 0.0 + 0.0 + 1.0i
// 18.0 + 3.0i, [v]
18.0 + 3.0i, // 0, 1 -> += {0, 1}ft + {1, 0}tf = 0.0 + 0.0
// 19.0 + 2.0i, [v]
19.0 + 2.0i, // 1, 0 -> += {0, 1}tf + {1, 0}ft = 0.0 + 0.0
// 25.0 + 1.0i, [v]
25.0 + 2.0i, // 1, 1 -> += {0, 1}tt + {1, 0}ff + {1,2}ff = 0.0 + 1.0i + 0.0
// 6.0, [v]
7.0, // 1, 2 -> += {1,2}ft = 1.0
// 7.0, [v]
7.0, // 2, 1 -> += {1,2}tf = 0.0
// 24.0 + 1009.0i, [v]
24.0 + 1010.0i, // 2, 2 -> += {1,2}tt + {2,3}ff + {3, 2}tt + {2,-1}ff = 0.0 + 0.0 + 0.0 + 1.0i
// 15.0 + 10.0i, [v]
17.0 + 10.0i, // 2, 3 -> += {2,3}ft + {3,2}tf = 0.0 + 2.0
// 14.0 + 11.0i, [v]
14.0 + 11.0i, // 3, 2 -> += {2,3}tf + {3,2}ft = 0.0 + 0.0
// 13.0 + 212.0i [v]
13.0 + 214.0i // 3, 3 -> += {2,3}tt + {3,2}ff + shunt(1) = 2.0i + 0.0 + 0.0
};

SUBCASE("Test whole scale update") {
YBus<true> ybus{topo_ptr, std::make_shared<MathModelParam<true> const>(param_sym)};
CHECK(ybus.admittance().size() == admittance_sym.size());
for (size_t i = 0; i < admittance_sym.size(); i++) {
CHECK(cabs(ybus.admittance()[i] - admittance_sym[i]) < numerical_tolerance);
}
ybus.update_admittance(std::make_shared<MathModelParam<true> const>(param_sym));

CHECK(ybus.admittance().size() == admittance_sym.size());
for (size_t i = 0; i < admittance_sym.size(); i++) {
CHECK(cabs(ybus.admittance()[i] - admittance_sym[i]) < numerical_tolerance);
}
}

SUBCASE("Test progressive update") {
YBus<true> ybus{topo_ptr, std::make_shared<MathModelParam<true> const>(param_sym)};
CHECK(ybus.admittance().size() == admittance_sym.size());
for (size_t i = 0; i < admittance_sym.size(); i++) {
CHECK(cabs(ybus.admittance()[i] - admittance_sym[i]) < numerical_tolerance);
}
auto branch_param_to_change_views =
std::views::iota(0, static_cast<int>(param_sym_update.branch_param.size())) |
std::views::filter([&param_sym_update](Idx i) {
return param_sym_update.branch_param[i].yff() != ComplexTensor<true>{0.0} ||
param_sym_update.branch_param[i].yft() != ComplexTensor<true>{0.0} ||
param_sym_update.branch_param[i].ytf() != ComplexTensor<true>{0.0} ||
param_sym_update.branch_param[i].ytt() != ComplexTensor<true>{0.0};
});
auto shunt_param_to_change_views = std::views::iota(0, static_cast<int>(param_sym_update.shunt_param.size())) |
std::views::filter([&param_sym_update](Idx i) {
return param_sym_update.shunt_param[i] != ComplexTensor<true>{0.0};
});

MathModelParamIncrement<true> math_model_param_incrmt;
math_model_param_incrmt.branch_param = param_sym_update.branch_param;
math_model_param_incrmt.shunt_param = param_sym_update.shunt_param;
math_model_param_incrmt.source_param = param_sym_update.source_param; // not sure if we actually need this
math_model_param_incrmt.branch_param_to_change = {branch_param_to_change_views.begin(),
branch_param_to_change_views.end()};
math_model_param_incrmt.shunt_param_to_change = {shunt_param_to_change_views.begin(),
shunt_param_to_change_views.end()};

auto math_model_param_incrmt_ptr =
std::make_shared<MathModelParamIncrement<true> const>(math_model_param_incrmt);

ybus.update_admittance_increment(math_model_param_incrmt_ptr, false);
// check increment
CHECK(ybus.admittance().size() == admittance_sym_2.size());
for (size_t i = 0; i < admittance_sym_2.size(); i++) {
CHECK(cabs(ybus.admittance()[i] - admittance_sym_2[i]) < numerical_tolerance);
}

ybus.update_admittance_increment(math_model_param_incrmt_ptr, true);
// check decrement
CHECK(ybus.admittance().size() == admittance_sym.size());
for (size_t i = 0; i < admittance_sym.size(); i++) {
CHECK(cabs(ybus.admittance()[i] - admittance_sym[i]) < numerical_tolerance);
}

ybus.update_admittance_increment(math_model_param_incrmt_ptr, true);
// check second decrement, which should not change anything
CHECK(ybus.admittance().size() == admittance_sym.size());
for (size_t i = 0; i < admittance_sym.size(); i++) {
CHECK(cabs(ybus.admittance()[i] - admittance_sym[i]) < numerical_tolerance);
}
}
}

/*
TODO:
- test counting_sort_element()