AI Governance and Policy

AI Governance and Policy

What kinds of work might contribute to AI governance?

There are a variety of ways to pursue AI governance strategies, and as the field becomes more mature, the paths are likely to become clearer and more established.

We generally don’t think people early in their careers should aim for a specific high-impact job. They should instead aim to develop skills, experience, knowledge, judgement, networks, and credentials — what we call career capital — that they can use later to have an impact.

This may involve following a standard career trajectory or moving around in different kinds of roles. Sometimes, you just have to apply to many different roles and test your fit for various types of work before you know what you’ll be good at. Most importantly, you should try to get excellent at something for which you have strong personal fit and that will let you contribute to solving pressing problems.AI Governance | Info-Tech Research Group

In the AI governance space, we see at least six broad categories of work that we think are important:

Thinking about the different kinds of career capital that are useful for the categories of work that appeal to you may suggest some next steps in your path. (We discuss how to assess your fit and enter this field below.)

You may want to move between these different categories of work at different points in your career. You can also test out your fit for various roles by taking internships, fellowships, entry-level jobs, temporary placements, or even doing independent research, all of which can serve as career capital for a range of paths.

We have also reviewed career paths in AI technical safety research and engineeringinformation security, and AI hardware expertise, which may be crucial to reducing risks from AI. These fields may also play a significant role in an effective governance agenda. People serious about pursuing a career in AI governance should familiarise themselves with these subjects as well.AI Governance Strategy, Framework & Best Practices: The Ultimate Guide

Government work

Taking a role within an influential government could help you play an important role in the development, enactment, and enforcement of AI policy.

We generally expect that the US federal government will be the most significant player in AI governance for the foreseeable future. This is because of its global influence and its jurisdiction over much of the AI industry, including the most prominent AI companies such as Anthropic, OpenAI, and Google DeepMind. It also has jurisdiction over key parts of the AI chip supply chain. Much of this article focuses on US policy and government.2

But other governments and international institutions matter too. For example, the UK government, the European Union, China, and others may present opportunities for impactful AI governance work. Some US state-level governments, such as California, may have opportunities for impact and gaining career capital.

What would this work involve? Sections below discuss how to enter US policy work and which areas of the government that you might aim for.

In 2023, the US and UK governments both announced new institutes for AI safety — both of which should provide valuable opportunities for career capital and potential impact.

But at the broadest level, people interested in positively shaping AI policy should gain skills and experience to work in areas of government with some connection to AI or emerging technology policy.

This can include roles in: legislative branches, domestic regulation, national security, diplomacy, appropriations and budgeting, and other policy areas.

If you can get a role already working directly on this issue, such as in one of the AI safety institutes or working for a lawmaker focused on AI, that could be a great opportunity.

Otherwise, you should seek to learn as much as you can about how policy works and which government roles might allow you to have the most impact. Try to establish yourself as someone who’s knowledgeable about the AI policy landscape. Having almost any significant government role that touches on some aspect of AI, or having some impressive AI-related credential, may be enough to go quite far.

One way to advance your career in government on a specific topic is what some call “getting visibility.” This involves using your position to learn about the landscape and connect with the actors and institutions in the policy area. You’ll want to engage socially with others in the policy field, get invited to meetings with other officials and agencies, and be asked for input on decisions. If you can establish yourself as a well-regarded expert on an important but neglected aspect of the issue, you’ll have a better shot at being included in key discussions and events.

AI Governance Framework - Artificial Intelligence Governance And Auditing

Career trajectories within government can be broken down roughly as follows:

  • Standard government track: This involves entering government at a relatively low level and climbing the seniority ladder. For the highest impact, you’d ideally reach senior levels by sticking around, forming relationships, gaining skills and experience, and getting promoted. You may move between agencies, departments, or branches.
  • Specialisation career capital: You can also move in and out of government throughout your career. People on this trajectory also work at nonprofits, think tanks, the private sector, government contractors, political parties, academia, and other organisations. But they will primarily focus on becoming an expert in a topic — such as AI. It can be harder to get seniority this way, but the value of expertise can sometimes be greater than the value of seniority.
  • Direct-impact work: Some people move into government jobs without a longer plan to build career capital because they see an opportunity for direct, immediate impact. This might involve getting tapped to lead an important commission or providing valuable input on an urgent project. This isn’t necessarily a strategy you can plan a career around, but it’s good to be aware of it as an option that might be worth taking at some point.

Read more about how to evaluate your fit and get started building relevant career capital in our article on policy and political skills.AI Governance Strategy, Framework & Best Practices: The Ultimate Guide

Research on AI policy and strategy

There’s still a lot of research to be done on AI governance strategy and implementation. The world needs more concrete policies that would really start to tackle the biggest threats; developing such policies and deepening our understanding of the strategic needs of the AI governance space are high priorities.

Other relevant research could involve surveys of public and expert opinion, legal research about the feasibility of proposed policies, technical research on issues like compute governance, and even higher-level theoretical research into questions about the societal implications of advanced AI.

Some research, such as that done by Epoch AI, focuses on forecasting the future course of AI developments, which can influence AI governance decisions.

However, several experts we’ve talked to warn that a lot of research on AI governance may prove to be useless. So it’s important to be reflective and seek input from others in the field about what kind of contribution you can make. We list several research organisations below that we think pursue promising research on this topic and could provide useful mentorship.

One approach for testing your fit for this work — especially when starting out — is to write up analyses and responses to existing work on AI policy or investigate some questions in this area that haven’t received much attention. You can then share your work widely, send it out for feedback from people in the field, and evaluate how you enjoy the work and how you might contribute to this field.

But don’t spend too long testing your fit without making much progress, and note that some are best able to contribute when they’re working on a team. So don’t over-invest in independent work, especially if there are few signs it’s working out especially well for you. This kind of project can make sense for maybe a month or a bit longer — but it’s unlikely to be a good idea to spend much more than that without funding or some really encouraging feedback from people working in the field.

If you have the experience to be hired as a researcher, work on AI governance can be done in academia, nonprofit organisations, and think tanks. Some government agencies and committees, too, perform valuable research.

Note that universities and academia have their own priorities and incentives that often aren’t aligned with producing the most impactful work. If you’re already an established researcher with tenure, it may be highly valuable to pivot into work on AI governance — your position may even give you a credible platform from which to advocate for important ideas.

But if you’re just starting out a research career and want to focus on this issue, you should carefully consider whether your work will be best supported inside academia. For example, if you know of a specific programme with particular mentors who will help you pursue answers to critical questions in this field, it might be worth doing. We’re less inclined to encourage people on this path to pursue generic academic-track roles without a clear idea of how they can do important research on AI governance.

Advanced degrees in policy or relevant technical fields may well be valuable, though — see more discussion of this in the section on how to assess your fit and get started.

You can also learn more in our article about how to become a researcher.AI Governance Framework: our proven 4-step process | Collibra

Industry work

Internal policy and corporate governance at the largest AI companies themselves is also important for reducing risks from AI.

At the highest level, deciding who sits on corporate boards, what kind of influence those boards have, and the incentives the organisation faces can have a major impact on a company’s choices. Many of these roles are filled by people with extensive management and organisational leadership experience, such as founding and running companies.

If you’re able to join a policy team at a major company, you can model threats and help develop, implement, and evaluate proposals to reduce risks. And you can build consensus around best practices, such as strong information security, using outside evaluators to find vulnerabilities and dangerous behaviours in AI systems (red teaming), and testing out the latest techniques from the field of AI safety.

And if, as we expect, AI companies face increasing government oversight, ensuring compliance with relevant laws and regulations will be a high priority. Communicating with government actors and facilitating coordination from inside the companies could be impactful work.

In general, it seems better for AI companies to be highly cooperative with each other3 and with outside groups seeking to minimise risks. And this doesn’t seem to be an outlandish hope — many industry leaders have expressed concern about catastrophic risks and have even called for regulation of the frontier technology they’re creating.

That said, cooperation will likely take a lot of effort. Companies creating powerful AI systems may resist some risk-reducing policies, because they’ll have strong incentives to commercialise their products. So getting buy-in from the key players, increasing trust and information-sharing, and building a consensus around high-level safety strategies will be valuable.AI Governance Implications Of Platform Based Businesses

Advocacy and lobbying

People outside of government or AI companies can influence the shape of public policy and corporate governance with advocacy and lobbying.

Advocacy is the general term for efforts to promote certain ideas and shape the public discourse, often around policy-related topics. Lobbying is a more targeted effort aimed at influencing legislation and policy, often by engaging with lawmakers and other officials.

If you believe AI companies may be disposed to advocate for generally beneficial regulation, you might work with them to push the government to adopt specific policies. It’s plausible that AI companies have the best understanding of the technology, as well as the risks, failure modes, and safest paths — and so are best positioned to inform policymakers.

On the other hand, AI companies might have too much of a vested interest in the shape of regulations to reliably advocate for broadly beneficial policies. If that’s right, it may be better to join or create advocacy organisations unconnected from the industry — perhaps supported by donations — that can take stances opposed to commercial interests.

For example, some believe it might be best to deliberately slow down or halt the development of increasingly powerful AI models. Advocates could make this demand of the companies themselves or of the government. But pushing for this step may be difficult for those involved with the companies creating advanced AI systems.

It’s also possible that the best outcomes will result from a balance of perspectives from inside and outside industry.

Advocacy can also:

  • Highlight neglected but promising approaches to governance that have been uncovered in research
  • Facilitate the work of policymakers by showcasing the public’s support for governance measures
  • Build bridges between researchers, policymakers, the media, and the public by communicating complicated ideas in an accessible way
  • Pressure companies to proceed more cautiously
  • Change public sentiment around AI and discourage irresponsible behaviour by individual actors

However, note that advocacy can sometimes backfire because predicting how information will be received isn’t straightforward. Be aware that:

  • Drawing attention to a cause area can sometimes trigger a backlash
  • Certain styles of rhetoric can alienate people or polarise public opinion
  • Spreading mistaken messages can discredit yourself and others

It’s important to keep these risks in mind and consult with others (particularly those who you respect but might disagree with tactically). And you should educate yourself deeply about the topic before explaining it to the public.

You can read more in the section about doing harm below. We also recommend reading our article on ways people trying to do good accidentally make things worse and how to avoid them. And you may find it useful to read our article on the skills needed for communicating important ideas.

Comments

No comments yet. Why don’t you start the discussion?

Leave a Reply

Your email address will not be published. Required fields are marked *