Commit a9eb420 (0 parents)
Showing 22 changed files with 6,040 additions and 0 deletions.
@@ -0,0 +1 @@
Sitemap: https://cjgo.github.io/ChewC/sitemap.xml
@@ -0,0 +1,12 @@ | ||
[ | ||
{ | ||
"objectID": "exp1.html", | ||
"href": "exp1.html", | ||
"title": "ChewC", | ||
"section": "", | ||
"text": "Truncation Selection Intensity * Budget\nThe RL agent will be given limited controls; it will only be able to select the top % of individuals used in random mating blocks\nHypothesis : we will see how the RL agent changes this % early versus later stages of the breeding program which will have an allotted time. e.g. 5 years vs 50 years vs 500 years and we will be able to see what dynamics it generally follows. Also we can give it a budget\nProcedure\n\n\nSet up breeding program(founder_pop, burn-in, traits…)\n\nInitial Rule-Based Selection: We begin with a fixed truncation selection, say selecting the top 20% of individuals based on their phenotype for the trait we want to improve. DRL Agent’s Action: The agent’s action is not to choose a new selection percentage outright, but rather to adjust the existing selection intensity. It would output a value that represents a change to the current 20%. For example: An action of +5 would increase the selection intensity to 25%. An action of -3 would decrease it to 17%. Bounded Action Space: We would define a range for these adjustments to keep the action space manageable and prevent extreme selections. For instance, the agent might only be able to adjust the selection intensity within ±10% of the initial value.\nuse an agent to decide which QTL to gene edit in a given line… plus budget etc… idea is that it will pick clever QTL based not on estimated additive value but also factor in haplotype diversity\nmay be interesting to see if it waits later in the breeding program to select/use budget for gene edits e.g. it waits til later stages of breeding program once the haplotype population structure plays out after generations of mendelian sampling.", | ||
"crumbs": [ | ||
"Procedure" | ||
] | ||
} | ||
] |
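The exp1 entry above describes the agent's bounded action space only in prose: start from a fixed top-20% truncation selection and let the agent nudge that percentage within ±10% of the initial value. Below is a minimal sketch of that mechanism, assuming a Python/NumPy setting; the function names and constants are illustrative and are not part of this commit or of the ChewC API.

```python
# Sketch of the bounded selection-intensity adjustment described in exp1.
# All names here are hypothetical, not taken from the ChewC codebase.
import numpy as np

INITIAL_PCT = 20.0   # fixed rule-based starting point: select the top 20% on phenotype
ADJUST_BOUND = 10.0  # the agent may move the intensity at most +/-10% of the initial value

def apply_action(current_pct: float, delta: float) -> float:
    """Apply the agent's adjustment, clipped to the allowed band around INITIAL_PCT."""
    lo, hi = INITIAL_PCT - ADJUST_BOUND, INITIAL_PCT + ADJUST_BOUND
    return float(np.clip(current_pct + delta, lo, hi))

def truncation_select(phenotypes: np.ndarray, pct: float) -> np.ndarray:
    """Return indices of the top `pct` percent of individuals by phenotype."""
    n_selected = max(1, int(round(len(phenotypes) * pct / 100.0)))
    return np.argsort(phenotypes)[::-1][:n_selected]

# Examples from the text, both relative to the initial 20%:
pct_up = apply_action(INITIAL_PCT, +5.0)    # 25.0 -> top 25% selected
pct_down = apply_action(INITIAL_PCT, -3.0)  # 17.0 -> top 17% selected
parents = truncation_select(np.random.normal(size=200), pct_up)
```

Clipping to a fixed band keeps the action space small and prevents the agent from collapsing to extreme selection fractions, which is the stated reason for bounding the adjustments in the experiment description.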