Relates to public-facing chatbots and provides definitions. Requires a deployer to: implement and maintain protocols to detect, respond to, report, and mitigate harm the deployer's public chatbot may cause a user, taking commercially reasonable steps to protect users' well-being; limit the collection and storage of user information gathered by a public chatbot to what is necessary to fulfill the deployer's purpose in making the chatbot publicly available; clearly and conspicuously disclose that the public chatbot is not a licensed medical, legal, financial, or mental health professional, both at the beginning of the chatbot's interaction with a user and at three-hour intervals of continuous interaction; and implement protocols to respond to user prompts indicating suicidal ideation or intent to self-harm. Includes other requirements. Allows the AG to bring enforcement actions and sets penalties of up to $2,500, or up to $7,500 for violating an injunction.