
How should AI systems behave, and who should decide?


In pursuing our mission, we are committed to ensuring that access to, benefits from, and influence over AI and AGI are widespread. We believe there are at least three building blocks needed to achieve this goal in the context of AI system behavior.

1. Improve default behavior. We want as many users as possible to find that our AI systems work for them “out of the box” and to feel that our technology understands and respects their values.

Toward that end, we invest in research and engineering to mitigate both glaring and subtle biases in how ChatGPT responds to different inputs. In some cases ChatGPT currently refuses outputs that it shouldn’t, and in some cases it doesn’t refuse when it should. We believe that improvement in both respects is possible.

In addition, we have room for improvement in other dimensions of system behavior, such as the system “making things up.” Feedback from users is invaluable for making these improvements.

2. Define your AI’s values, within broad bounds. We believe that AI should be a useful tool for individual people, and thus customizable by each user up to limits defined by society. Therefore, we are developing an upgrade to ChatGPT that lets users easily customize its behavior, as sketched below.

This means allowing system outputs that other people (ourselves included) may strongly disagree with. Striking the right balance here will be challenging: taking customization to the extreme would risk enabling malicious uses of our technology and sycophantic AIs that mindlessly amplify people’s existing beliefs.

Because of that, there will always be some bounds on system behavior. The challenge is defining what those bounds are. If we tried to make all of these decisions ourselves, or if we tried to develop one monolithic AI system, we would fall short of the commitment we made in our Charter to “avoid undue concentration of power.”
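To make “customization within bounds” concrete, here is a minimal sketch using the OpenAI Python client. The actual customization interface described above is not specified, so a system message stands in for user-defined values; the hard-bounds text, preference text, and model name are illustrative assumptions, not details from the post.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical texts for illustration only.
HARD_BOUNDS = "Refuse requests for disallowed content, regardless of user preferences."
user_preferences = "Answer concisely and flag claims you are uncertain about."

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        # Society-level bounds come first and are not overridable...
        {"role": "system", "content": HARD_BOUNDS},
        # ...then the user's own values are layered within those bounds.
        {"role": "system", "content": user_preferences},
        {"role": "user", "content": "What are the tradeoffs of customizable AI behavior?"},
    ],
)

print(response.choices[0].message.content)
```

The layering order reflects the idea in the post: defaults and hard bounds are set collectively, while individual preferences operate only within them.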

3. Public input on defaults and hard bounds. One way to avoid undue concentration of power is to give the people who use or are affected by systems like ChatGPT the ability to influence those systems’ rules.

We believe that many decisions about our defaults and hard bounds should be made collectively, and while practical implementation is a challenge, we aim to include as many perspectives as possible. As a starting point, we have sought external input on our technology in the form of red teaming. We also recently began soliciting public input on AI in education (one particularly important context in which our technology is being deployed).

We are in the early stages of piloting efforts to solicit public input on topics such as system behavior, disclosure mechanisms (such as watermarking), and our broader enforcement policies. We are also exploring partnerships with external organizations to conduct third-party audits of our safety efforts and policies.


