Here are some ideas related to Decision Control.
When a goal is defined, it is executed downwards through all the layers.
Each layer may break a decision down into two or more parts, which are separately communicated to the layer below.
By the time the bottom layer, "TaskProsecution", is reached, one goal may consist of many different parts.
These parts need to be aligned, so that no task conflicts with the others.
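To make the decomposition concrete, here is a minimal sketch in Python. The `Decision` class and the layer names other than "TaskProsecution" are my own illustrative assumptions, not part of the proposal itself:

```python
from dataclasses import dataclass, field

@dataclass
class Decision:
    """One decision at one layer; children are its parts at the layer below."""
    layer: str
    description: str
    parent: "Decision | None" = None
    children: list["Decision"] = field(default_factory=list)

    def decompose(self, layer: str, parts: list[str]) -> list["Decision"]:
        """Break this decision into parts for the layer below."""
        self.children = [Decision(layer, p, parent=self) for p in parts]
        return self.children

# A goal propagates downwards, splitting at each layer.
# (Layer names here are placeholders except "TaskProsecution".)
goal = Decision("TopLayer", "assist the user safely")
strategies = goal.decompose("MiddleLayer", ["answer questions", "avoid harm"])
tasks = strategies[0].decompose("TaskProsecution", ["parse request", "draft reply"])
print([t.description for t in tasks])  # the many parts at the bottom layer
```

The parent links are what would let a checker walk back up and compare each task against everything above it.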
It is highly probable that more complex or larger goals will end up with tasks that contradict or counteract each other.
There is also a risk that a single task conflicts with a moral or ethical principle that it is not a direct "child" of.
For these reasons, it is probably a very good idea to verify and align the decisions of each layer against the orders and principles of the layer above it.
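The verification step could look something like the sketch below. `is_aligned` is a hypothetical stand-in for whatever alignment test a real agent would use (a rule engine, a classifier, a model call); note that the check runs against *all* higher principles, not only the task's direct parent, which covers the "not a direct child" risk mentioned above:

```python
def is_aligned(decision: str, principle: str) -> bool:
    # Toy stand-in: a decision conflicts if it explicitly negates the principle.
    return f"not {principle}" not in decision

def verify_chain(task: str, principles: list[str]) -> list[str]:
    """Return every higher-level principle the task violates,
    not only the one it was directly derived from."""
    return [p for p in principles if not is_aligned(task, p)]

violations = verify_chain("do not be honest about costs", ["be honest", "be helpful"])
print(violations)  # ['be honest']
```

A real implementation would replace the string test with something far more capable, but the control flow (every task checked against every ancestor principle) is the point.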
In statecraft there are notions such as "checks and balances". That is hard to attain here, since we are talking about a single primary agent.
The proposed feedback system would lessen the risks and provide more stable operation by double-checking that each decision or task is aligned with all higher value definitions.
This would add computation time, but it is absolutely necessary if we want to be sure the model actually does what it is supposed to do.
To alleviate the computational burden, the following tools could be used (suggestions from ChatGPT):
Parallel Processing
Caching and Memory
Priority Queuing
Heuristics
Feedback Aggregation
Adaptive Learning
Threshold-based Activation
Distributed System
Predictive Analysis
Direct Interfaces
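As a rough illustration of how two of these ideas could combine, here is a sketch of "Caching and Memory" plus "Threshold-based Activation". Everything here is a toy assumption: `lru_cache` memoizes repeated (task, principle) checks, and a cheap hypothetical risk heuristic decides whether the expensive check runs at all:

```python
from functools import lru_cache

@lru_cache(maxsize=4096)
def check_alignment(task: str, principle: str) -> bool:
    # Placeholder for an expensive alignment check; cached so repeated
    # (task, principle) pairs are only evaluated once.
    return f"not {principle}" not in task

def risk_score(task: str) -> float:
    # Hypothetical cheap heuristic: negation-heavy wording scores higher.
    words = task.split()
    return words.count("not") / max(len(words), 1)

def verify(task: str, principles: list[str], threshold: float = 0.1) -> bool:
    if risk_score(task) < threshold:
        return True  # below threshold: skip the expensive checks entirely
    return all(check_alignment(task, p) for p in principles)
```

The other items in the list (parallel processing, feedback aggregation, and so on) would slot in around the same loop, e.g. running `check_alignment` calls concurrently or batching verdicts before escalating.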
In any case, computation cost cannot and should not be an argument against controlling the quality of the outcomes of goals and missions.
I apologize for the design; I just used draw.io to make this graphical diagram. The model draft itself is made with RDF.
This is just a draft of a proposed Decision Control. Any feedback is of course appreciated.