Companies commit to 'protecting' humanity from Artificial Intelligence; see which ones

The White House confirmed last week that Adobe, IBM, Palantir, Nvidia and Salesforce, along with three other artificial intelligence (AI) companies, have adopted voluntary safety and reliability standards.

These companies join already recognized names in the segment, such as Amazon, Anthropic, Google, Inflection AI, Microsoft and OpenAI, which committed to this initiative in July.

Although these standards are unregulated and not under government oversight, they represent a concerted effort to increase confidence in the evolution of technology.

The growing relevance of AI, amplified by OpenAI's launch of ChatGPT the previous year, has placed the technology at the center of attention, with notable impacts on the workforce, the spread of misinformation and concerns about autonomous developments.

Due to such challenges, Washington has seen intense debates among legislators, regulators and entrepreneurs in the field about how to manage and guide the progress of AI.

Tech giants will testify before the Senate on AI regulations

In a series of events aimed at regulating AI, Brad Smith of Microsoft and William Dally of Nvidia are scheduled to testify at a hearing before the Senate subcommittee on privacy, technology and law.

In a few days, lawmakers will gather for a high-level AI event hosted by New York Democratic Senator Chuck Schumer, joined by the following entrepreneurs:

  • Elon Musk (SpaceX, Tesla, among others);

  • Mark Zuckerberg (Meta);

  • Sam Altman (OpenAI);

  • Sundar Pichai (Google).

Jeff Zients, White House chief of staff, responded to the growing commitment from technology companies, stating that “the president was clear: harness the benefits of AI, manage the risks and move forward quickly. And it is this commitment that we are demonstrating in partnership with the private sector.”

The companies, recognizing their responsibility, agreed to:

  • Evaluate future products for potential safety risks;

  • Introduce watermarks to identify AI-generated content;

  • Share information about security threats;

  • Report trends observed in their systems.

US government surveillance in the face of security challenges

The accelerated growth of AI in recent years has provoked a series of debates on a global scale.

At the epicenter of these discussions, the United States, a technological powerhouse, has shown concern about the impacts of this technology.

The American government, upon identifying the transformative potential of AI, has turned its attention more closely to the security implications arising from such a technological revolution.

The concern is not limited to data protection and privacy; it extends to the integrity of critical systems, potential threats to national infrastructure and the possibility of AI being misused in conflict scenarios.

Such recognition reflects the urgency of establishing clear guidelines and robust regulations that ensure the safe and ethical development of artificial intelligence.
