
Amazon will offer human benchmarking teams to test AI models

Noah Berger

Amazon wants users to evaluate AI models better and encourage more humans to be involved in the process.

During the AWS re:Invent conference, AWS vice president of database, analytics, and machine learning Swami Sivasubramanian announced Model Evaluation on Bedrock, now available in preview for models hosted in Amazon Bedrock. Without a way to transparently test models, developers may end up using one that is not accurate enough for a question-and-answer project or one that is too large for their use case.

“Model selection and evaluation is not just done at the beginning, but is something that’s repeated periodically,” Sivasubramanian said. “We think having a human in the loop is important, so we are offering a way to…
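The workflow Sivasubramanian describes, periodically re-scoring candidate models with human reviewers in the loop, can be sketched as a simple selection routine. This is an illustrative sketch only; the function, model names, and scores are hypothetical and are not Bedrock's API.

```python
# Hypothetical sketch of human-in-the-loop model selection.
# None of these names come from Amazon Bedrock.

def select_model(human_ratings):
    """Return the model with the highest mean human rating.

    human_ratings maps a model name to a list of 1-5 scores
    collected from human reviewers on the same set of prompts.
    """
    averages = {
        model: sum(scores) / len(scores)
        for model, scores in human_ratings.items()
    }
    best = max(averages, key=averages.get)
    return best, averages

# Re-run this each evaluation cycle as new human scores arrive.
best, averages = select_model({
    "model-a": [4, 5, 4],  # hypothetical reviewer scores
    "model-b": [3, 4, 3],
})
print(best)  # → model-a
```

The point of the sketch is the loop, not the math: selection is rerun whenever a new batch of human ratings comes in, matching the "repeated periodically" framing in the quote above.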

