AI Now Institute
Founded: November 15, 2017
Founders: Kate Crawford, Meredith Whittaker
Type: 501(c)(3) nonprofit organization
Coordinates: 40°44′06″N 73°59′41″W
Website: www.ainowinstitute.org

The AI Now Institute (AI Now) is an American research institute that studies the social implications of artificial intelligence and conducts policy research addressing the concentration of power in the tech industry.[2][3][4] AI Now has partnered with organizations such as the Distributed AI Research Institute (DAIR), Data & Society, the Ada Lovelace Institute, the New York University Tandon School of Engineering, the New York University Center for Data Science, the Partnership on AI, and the ACLU. AI Now has produced annual reports examining the social implications of artificial intelligence. In 2021–2022, AI Now's leaders served as Senior Advisors on AI to Chair Lina Khan at the Federal Trade Commission.[5] Its executive director is Amba Kak.[6][7]

Founding and mission

AI Now grew out of a 2016 symposium spearheaded by the Obama White House Office of Science and Technology Policy. The event was led by Meredith Whittaker, the founder of Google's Open Research Group, and Kate Crawford, a principal researcher at Microsoft Research.[8] It focused on near-term implications of AI in social domains: inequality, labor, ethics, and healthcare.[9]

In November 2017, AI Now held a second symposium on AI and social issues, and publicly launched the AI Now Institute in partnership with New York University.[8] It is claimed to be the first university research institute focused on the social implications of AI, and the first AI institute founded and led by women.[1] It is now a fully independent institute.

In an interview with NPR, Crawford stated that the motivation for founding AI Now was that the application of AI in social domains such as health care, education, and criminal justice was being treated as a purely technical problem. The goal of AI Now's research is to treat these as social problems first, and to bring in domain experts in areas like sociology, law, and history to study the implications of AI.[10]

Research

AI Now publishes an annual report on the state of AI and its integration into society. Its 2017 report stated that "current framings of AI ethics are failing" and provided ten strategic recommendations for the field, including pre-release trials of AI systems and increased research into bias and diversity in the field. The report was noted for calling for an end to "black box" systems in core social domains, such as those responsible for criminal justice, healthcare, welfare, and education.[11][12][13]

In April 2018, AI Now released a framework for algorithmic impact assessments (the AIA Report) as a way for governments to assess the use of AI in public agencies. According to AI Now, an AIA would be similar to an environmental impact assessment, in that it would require public disclosure and access for external experts to evaluate the effects of an AI system and any unintended consequences. This would allow systems to be vetted for issues like biased outcomes or skewed training data, which researchers have already identified in algorithmic systems deployed across the country.[14][15][16]

Its 2023 report[17] argued that meaningful reform of the tech sector must focus on addressing concentrated power in the tech industry.[18]

References

  1. "New Artificial Intelligence Research Institute Launches". NYU Tandon News. 2017-11-25. Retrieved 2018-07-07.
  2. "About Us". AI Now Institute. Retrieved 2023-05-12.
  3. "The field of AI research is about to get way bigger than code". Quartz. 2017-11-15. Retrieved 2018-07-09.
  4. "Biased AI Is A Threat To Civil Liberties. The ACLU Has A Plan To Fix It". Fast Company. 2017-07-25. Retrieved 2018-07-07.
  5. "FTC Chair Lina M. Khan Announces New Appointments in Agency Leadership Positions". Federal Trade Commission. 2021-11-19. Retrieved 2023-05-12.
  6. "Amazon, Google, Meta, Microsoft and other tech firms agree to AI safeguards set by the White House". AP News. 2023-07-21. Retrieved 2023-07-21.
  7. "People". AI Now Institute. Retrieved 2023-07-21.
  8. Ahmed, Salmana. "In Pursuit of Fair and Accountable AI". Omidyar. Retrieved 19 July 2018.
  9. "2016 Symposium". ainowinstitute.org. Archived from the original on 2018-07-20. Retrieved 2018-07-09.
  10. "Studying Artificial Intelligence At New York University". NPR. Retrieved 2018-07-18.
  11. "AI Now 2017 Report". AI Now. 18 October 2017. Retrieved 19 July 2018.
  12. Simonite, Tom (18 October 2017). "AI Experts Want to End 'Black Box' Algorithms in Government". Wired. Retrieved 19 July 2018.
  13. Rosenberg, Scott (1 November 2017). "Why AI is Still Waiting For Its Ethics Transplant". Wired. Retrieved 19 July 2018.
  14. Gershgorn, Dave (9 April 2018). "AI experts want government algorithms to be studied like environmental hazards". Quartz. Retrieved 19 July 2018.
  15. "AI Now AIA Report" (PDF). AI Now. Archived from the original (PDF) on 14 June 2020. Retrieved 19 July 2018.
  16. Reisman, Dillon (16 April 2018). "Algorithms Are Making Government Decisions. The Public Needs to Have a Say". Medium. ACLU. Retrieved 19 July 2018.
  17. "2023 Landscape". AI Now Institute. Retrieved 2023-05-16.
  18. Samuel, Sigal (2023-04-12). "Finally, a realistic roadmap for getting AI companies in check". Vox. Retrieved 2023-05-16.
