Baanboard.com - News

Poll: How big is your Baan-DB (just Data AND Indexes)?

  0 - 200 GB        16%
  200 - 500 GB      28%
  500 - 800 GB       2%
  800 - 1200 GB      9%
  1200 - 1500 GB     9%
  1500 - 2000 GB    14%
  > 2000 GB         21%

Total votes: 43

RSS Newsfeeds

Comic for February 15, 2019

Dilbert - February 16, 2019 - 12:59am
Categories: Geek

Researchers, scared by their own work, hold back “deepfakes for text” AI

Ars Technica - 48 min 14 sec ago

(Image caption: "This is fine.")

OpenAI, a non-profit research company investigating "the path to safe artificial intelligence," has developed a machine learning system called Generative Pre-trained Transformer-2 (GPT-2), capable of generating text based on brief writing prompts. The result comes so close to mimicking human writing that it could potentially be used for "deepfake" content. Trained on 40 gigabytes of text retrieved from sources on the Internet, GPT-2 generates plausible "news" stories and other text that match the style and content of a brief text prompt.

The performance of the system was so disconcerting that the researchers are releasing only a reduced version of GPT-2, based on a much smaller text corpus. In a blog post on the project and this decision, researchers Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever wrote:

Due to concerns about large language models being used to generate deceptive, biased, or abusive language at scale, we are only releasing a much smaller version of GPT-2 along with sampling code. We are not releasing the dataset, training code, or GPT-2 model weights. Nearly a year ago we wrote in the OpenAI Charter: “we expect that safety and security concerns will reduce our traditional publishing in the future, while increasing the importance of sharing safety, policy, and standards research,” and we see this current work as potentially representing the early beginnings of such concerns, which we expect may grow over time. This decision, as well as our discussion of it, is an experiment: while we are not sure that it is the right decision today, we believe that the AI community will eventually need to tackle the issue of publication norms in a thoughtful way in certain research areas.
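
As a rough illustration of what "sampling code" for a model like this does, the sketch below generates a continuation of a short prompt using the small, publicly available GPT-2 checkpoint. It is not OpenAI's own release: it assumes the third-party Hugging Face transformers library and its "gpt2" checkpoint name, the prompt string is invented for illustration, and top-k sampling is used simply as one plausible decoding choice.

# A minimal sketch, not OpenAI's released sampling code: it assumes the
# Hugging Face "transformers" library (pip install transformers torch) and
# the small "gpt2" checkpoint; the prompt is invented for illustration.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "Scientists announced today that"  # hypothetical prompt
input_ids = tokenizer.encode(prompt, return_tensors="pt")

# Sample a continuation with top-k sampling; the short max_length keeps the
# example quick to run on a CPU.
output = model.generate(
    input_ids,
    max_length=100,
    do_sample=True,
    top_k=40,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))

Even with this small model, a few runs make the article's point: the continuations read like plausible, if uneven, prose that simply extends whatever the prompt sets up.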

OpenAI is funded by contributions from a group of technology executives and investors connected to what some have referred to as the PayPal "mafia"—Elon Musk, Peter Thiel, Jessica Livingston, and Sam Altman of YCombinator, former PayPal COO and LinkedIn co-founder Reid Hoffman, and former Stripe Chief Technology Officer Greg Brockman. Brockman now serves as OpenAI's CTO. Musk has repeatedly warned of the potential existential dangers posed by AI, and OpenAI is focused on trying to shape the future of artificial intelligence technology—ideally moving it away from potentially harmful applications.


Facebook, Google, CDC under pressure to stop anti-vax garbage from spreading

Ars Technica - 1 hour 53 min ago

(Image credit: ROBYN BECK/AFP/Getty Images)

With five measles outbreaks ongoing in the US, lawmakers are questioning both health officials and tech giants on their efforts to combat the noxious anti-vaccine misinformation fueling the spread of disease.

Last week, Lamar Alexander (R-Tenn.), chairman of the Senate health committee, along with ranking member Patty Murray (D-Wash.) sent a letter to the Centers for Disease Control and Prevention and Health and Human Services. The lawmakers asked what health officials were doing to fight misinformation and help states dealing with outbreaks. “Many factors contribute to vaccine hesitancy, all of which demand attention from CDC and [HHS’ National Vaccine Program Office],” the lawmakers wrote. On Thursday, February 14, the committee announced that it will hold a hearing on the subject on March 5.

Also Thursday, Rep. Adam Schiff (D-Calif.) sent letters to Google CEO Sundar Pichai and Facebook CEO Mark Zuckerberg. In them, Schiff expressed concern over the outbreaks as well as the tech companies’ role in enabling the dissemination of medically inaccurate information.


Huge study finds professors’ attitudes affect students’ grades

Ars Technica - 1 hour 58 min ago

(Image credit: nikolayhg)

“You just have to believe!” is the kind of trite line you’d expect in a kids’ movie about a magic talking dog. But it seems the phrase doubles as important advice for college professors. That’s the upshot of a huge study at Indiana University, led by Elizabeth Canning, where researchers measured the attitudes of instructors and the grades their students earned in classes.

Mind the gap

One of the disappointing problems in higher education is the frequent existence of an “achievement gap” between underrepresented minorities and other students. It seems to be the result of various obstacles that students face along the way, from stereotypes about which groups are naturally skilled in which fields, to cultural differences that make some students hesitant to seek help in a class, to a lack of advantages in primary and secondary education. A lot of things can get in the way.

These obstacles don't have to take the ugly form of a racist teacher outright telling a student they aren't welcome. Many of them are unintentional and subtle. If a student has the perception, for any reason, that they aren't expected to succeed, that can drain enough motivation to ensure that they don't.


