Key Questions: Rules/Constraints in Cosmos

This document is intended as a dump of many “big picture” unanswered questions known to be triggered by content elsewhere in the Key Docs. It was created as a general “shelf” for big, potentially critical questions to grapple with as a community. As the process of delving into proposals in the Key Docs proceeds, content from here may be merged into specific conversations.

Rules, Constraints and Limiting/Negative Feedback Loops in Cosmos

Some open questions:

· Regarding the dynamic tension between curated and open spaces: articulating the policies for what gets curated, how works/tasks/people are evaluated, etc. Still to be figured out: curation filters, degrees, and logic.

· Membership is curated, in that access is tied to system capacity and priorities. This raises the politics, and the issue of the intrinsic biases, of whoever the gatekeepers are.

· Certain domains of publication are curated, whereas others are open, but open works have the potential to work their way up to the exclusive spaces “on merit” (LC, upvotes, etc.)?

· What is the relationship/intersectionality between Cosmos-authorized editorial boards and spaces for creators’ pitching/pilots?

· Define how the co-op would track “use basis” for calculating patronage refunds, given the variety of uses of Cosmos (is it based solely on a calculation of LC activity?)
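Whatever measure of “use basis” is chosen, the standard co-op mechanic is to split the surplus among members in proportion to each member's share of tracked use. A minimal sketch, assuming use is reduced to a single activity score per member (the document leaves open whether LC activity alone is the right measure); the function and member names are hypothetical illustrations:

```python
# Hypothetical sketch: patronage refunds proportional to "use basis".
# Assumes each member's use is one numeric activity score (e.g. LC activity);
# whether that is the right measure is exactly the open question above.

def patronage_refunds(surplus: float, activity: dict[str, float]) -> dict[str, float]:
    """Split a co-op surplus among members in proportion to their use."""
    total = sum(activity.values())
    if total == 0:
        # No tracked use this period: nothing to distribute on a use basis.
        return {member: 0.0 for member in activity}
    return {member: surplus * use / total for member, use in activity.items()}

# Example: a 1000-unit surplus split across three members' activity scores.
refunds = patronage_refunds(1000.0, {"ada": 30.0, "ben": 50.0, "cai": 20.0})
# → {"ada": 300.0, "ben": 500.0, "cai": 200.0}
```

The harder design question is upstream of this arithmetic: which uses count, and at what weights.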

Cost-benefit analysis: how much is the right amount of certain functions to provide? E.g., paid community guides: how does Cosmos measure the success of individuals in these positions, and the efficacy of the positions overall? Ditto for other key roles or initiatives. What are the feedback loops (self-awareness), and how will Cosmos respond to the feedback measured/received?

All of the missing figures need to be priced in the “sweet spot” between what is reasonable in the consumer’s eyes for the value received and what allows operating at a profit in order to be sustainable (profits are redistributed back to members anyway). OR: could we code for a dynamic sweet spot, one that mutually evolves with membership and participation levels and with the algorithms governing those effects?
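One way to read “a dynamic sweet spot” is as a simple feedback controller: nudge a price up when participation runs above a target and down when it runs below. A minimal sketch under assumed parameters; the target, gain, and price bounds here are illustrative, not Cosmos policy:

```python
# Hypothetical sketch of a dynamic "sweet spot": a proportional feedback
# step that moves price toward a participation target. Gain and bounds
# are illustrative assumptions, not proposed Cosmos values.

def adjust_price(price: float, participation: float, target: float,
                 gain: float = 0.1, floor: float = 1.0,
                 ceiling: float = 100.0) -> float:
    """Raise price when participation exceeds target, lower it when below."""
    error = (participation - target) / target   # relative over/under-shoot
    new_price = price * (1 + gain * error)
    return min(ceiling, max(floor, new_price))  # keep price within bounds

# Example: three simulated periods against a target of 1000 participants.
price = 20.0
for participation in (800, 1200, 1000):
    price = adjust_price(price, participation, target=1000)
```

A real mechanism would need smoothing and guardrails against oscillation, but the point is that the “sweet spot” becomes a moving output of member behavior rather than a fixed figure.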

Issue of priority placed on member cultural/experiential diversity:

Politics of inherent biases. How do we cultivate broad tastes among members on our platform? How do we ensure platform and component accessibility to anyone who could benefit from Cosmos? How do we intensify our search for high-quality, edgy (integrate-able?) difference? How do we discern harmful divergence from innovation/mutation worthy of further exploration, dynamically, as signals are emerging?

Anyone can create private subgroups in Cosmos; it’s not that all of Cosmos “must” have some articulated standard of what diversity or equity looks like in conversations/the organization. Again, how much do we “legislate” at the cultural level (in the Constitution) vs. how much do we let participants innovate on cultural norms and set autonomous boundaries?

Intellectual and cultural tastes and implicit biases in the composition of people in the Cosmos system can and do filter outward, and can affect collective cultural norms/behaviors in undesirable ways, ways that risk accidentally replicating “old wave” oppressive patterns/structures (which would mean Cosmos is not authentically making progress toward its mission). When oppressive patterns are unconsciously brought into the system by the habitual programming of the humans in it, rather than by intentional design or attention to correcting course through feedback, what is the appropriate remedy?

One especially ripe area of exploration is considering feelings across people in the system about the desired/preferred relationship between humans and AI in the system. How do we interact through our divergence and toward convergence, and on what?

What is the relationship of AI to human control? Do we want AI to perform faultless execution of some of the algorithms we set, in circumstances where there is no need for arbitration or for subtle/complex human capacities to be engaged (e.g. administration, information management)? And do we want humans to be involved when arbitration is necessary (e.g. curating, evaluating/analyzing, ethics)?
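The division of labor these questions describe can be sketched as a simple routing rule: tasks that need no arbitration go to automated execution, and tasks requiring judgment are flagged for humans. The categories and labels below are hypothetical illustrations of the split, not a Cosmos specification:

```python
# Hypothetical sketch of the AI/human division of labor described above.
# Task categories are illustrative; the real boundary is an open question.

NEEDS_ARBITRATION = {"curation", "evaluation", "ethics"}
AUTOMATABLE = {"administration", "information_management"}

def route_task(category: str) -> str:
    """Decide whether a task is executed by AI or handled by humans."""
    if category in AUTOMATABLE:
        return "ai"            # faultless execution of a fixed algorithm
    if category in NEEDS_ARBITRATION:
        return "human"         # subtle/complex human capacities engaged
    return "human_review"      # default to humans when unclassified

# Example: ethics questions route to humans, admin work to AI.
assert route_task("ethics") == "human"
assert route_task("administration") == "ai"
```

Note the deliberate default: anything not explicitly classified falls back to human review, which is one possible answer to the control question the paragraph raises.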