> Among other achievements, it has drafted an op-ed that was commissioned by
> The Guardian,

So, what happened here is that GPT-3 produced eight different op-eds; they
were all kept short, and this was deliberate, because one of the fundamental
and unsolved issues with artificial text generation is its inability to make
sense over longer bodies of text.  Any given sentence is fine, a couple of
sentences are usually fine, but anything longer is problematic - and always
will be, I suspect, because to develop a neural net which has seen enough
material on enough subjects to fake it over extended bodies of text, you'd
need such a vast amount of content that it is impossible; that much content
doesn't actually exist.  It's a sort of n^n problem: you end up needing an
*awful* lot more data and computational power just to move ahead a tiny bit.
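
A minimal sketch of that shape of scaling, in Python; the power law and the
exponent here are illustrative assumptions of mine, not measured values from
any model:

```python
# Illustrative sketch only: assume coherent span grows as a power law in
# training data, span = k * data**alpha with alpha < 1.  Inverting that law
# shows how much more data each extra unit of coherence would demand.

def data_needed(span, k=1.0, alpha=0.3):
    """Data required for a target coherent span, under the assumed law."""
    return (span / k) ** (1 / alpha)

for span in [1, 2, 4, 8]:  # target span in arbitrary "sentence" units
    print(f"span {span}: data ~ {data_needed(span):,.0f} units")
```

On this toy law, going from a one-unit span to an eight-unit span costs
roughly a thousand times the data - which is the shape of the problem,
whatever the true exponent turns out to be.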

The editors at the Guardian then edited these eight documents, as they saw
fit, to produce the single piece which was published.

I may be wrong, but I suspect they took the most sane paragraphs from the
eight attempts, fixed them up, and re-ordered them to make sense.

If you're thinking this whole piece is the *direct* product of a text
generator, it really isn't, and the areas where humans helped are exactly
the areas where the method used is fundamentally and inherently weak.

> written news stories that a majority of readers thought were written by
> humans,

This claim is backed up by a link to an arXiv white paper.

In the white paper, various AI models (of increasing size, culminating in
GPT-3) were each given an original news piece of around 200 words, written
by a human, and asked to generate text from this primer.  The generated text
was then presented to human judges, who had to decide whether it was written
by a human or by an AI.

I may well just not be seeing it, but all I can see is the claim that as the
size of the model increases, the time the judges take to decide increases,
and their success rate at spotting the AI text drops.  No actual numbers
appear to be given.
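
To be clear about what "success rate" means here: it would be the fraction
of judgements which correctly identify the source of the text.  A toy
illustration in Python, with made-up verdicts, since I can't find the
paper's actual figures:

```python
# Toy illustration only: these (source, verdict) pairs are invented,
# not data from the paper.  Success rate = fraction of judgements
# where the verdict matches the true source of the text.
trials = [
    ("human", "human"), ("ai", "human"), ("ai", "ai"),
    ("human", "human"), ("ai", "human"), ("human", "ai"),
]

correct = sum(source == verdict for source, verdict in trials)
print(f"success rate: {correct / len(trials):.0%}")  # 50% on this toy data
```

On a two-way judgement, 50% is the floor: it means the judges are doing no
better than guessing.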

As before, short text is being used because of the fundamental and inherent
difficulty in producing longer texts.

> and devised new Internet memes.

This claim is backed up by a link to a tweet.  The tweet appears to show, in
a video of sequential still images, a series of short one- or two-word
phrases submitted to GPT-3 by some guy, and its responses.  The only other
information about what was done is that "explaining the meme in the priming
improves the consistency/quality".  Presumably these also represent the best
results found, as selected by a human.

> In light of this breakthrough, we consider a simple but important question:
> can automation generate content for disinformation campaigns?

Examining the claims made so far, I can see no breakthrough.

I've not read the document published by the Center for Security and Emerging
Technology.  It may be that it is a well-balanced, rational and reasonable
document.  However, on closer examination, this one paragraph appears to be
sensationalism; the claims made are misleading, and seem far in excess of
the basis upon which they are made.
