Snapchat announced new artificial intelligence (AI) tools for users on Tuesday. The social media giant revealed during its sixth annual Snap Partner Summit that it is planning to introduce an AI video tool for those with a creator account. The AI tool will allow users to generate videos from text and image prompts. All videos generated using AI are said to be watermarked by the company to ensure that other users can differentiate real videos from AI-generated ones.
In a press release, the social media company detailed the new features. The AI Video tool is among the most exciting features announced during the event. Dubbed Snap AI Video, it will be available only to Creators on the platform. Notably, to become a Creator, users must have a public profile, actively post to their Stories and Spotlight, and have a sizeable audience.
The feature appears to be similar to a typical AI video generator and can generate videos from text prompts. Snapchat said that creators will soon be able to generate videos from image prompts as well. The feature has been rolled out in beta on the web for a select group of creators. A company spokesperson told TechCrunch that the AI feature is powered by Snap's in-house foundational video models.
Once the feature becomes widely available, the company also plans to use icons and context cards to let users know when a Snap was made using AI. A specific watermark will remain visible even when the content is downloaded or shared. The spokesperson also told the publication that the video models have been thoroughly tested and underwent safety evaluations to ensure they do not generate any harmful content.
Apart from this, Snapchat also released a new AI Lens that lets users appear as their elderly selves. Snapchat Memories, which is available to Snapchat+ subscribers, will now support AI captions and Lenses. Further, My AI, the company's native chatbot, is also getting improvements and can perform several new tasks. Snapchat says users can now solve more complex problems, interpret parking signs, translate menus in foreign languages, identify unique plants, and more with My AI. Finally, the company is also partnering with OpenAI to give developers access to multimodal large language models (LLMs) to let them create more Lenses that recognise objects and provide more context.