If you use Citrix NetScaler for secure remote access to your Citrix XenApp/Citrix XenDesktop deployment, you may be wondering if there’s more that it can do. You are correct! NetScaler also offers load balancing, global server load balancing, web interface integration, HDX traffic inspection and much more. It can enhance Citrix ShareFile StorageZones and Citrix mobile deployments. Join this session for a quick NetScaler refresher.
2. I'm not an analyst or blogger; I've worked on the customer side as a sysadmin, engineer, and even a developer (I can't code anything well!). More recently I have been on the partner side and generally handle architecting large VDI deployments and a number of other things. First used PVS as a Citrix customer in 2008. Am currently an architect and I implement solutions utilizing PVS. Many of my deployments are over 5,000 concurrent seats. I'm not pitching Citrix, I'm pitching stuff that works. Also, I've attended partner breakouts and felt I was in an infomercial being pitched something. This is not one of those presentations.
3. XenApp is a great argument, but with the newest XenDesktop version it isn't much of a reason today. Nevertheless, not everyone gets to upgrade immediately, and for that reason it does still make sense.

Storage concerns are a huge issue. Did you know that if your storage doesn't support native thin provisioning or deduplication (or a combination of the two, but I digress), then you really will not see any storage savings? Furthermore, designed correctly, PVS will be able to deliver better performance than MCS. If you're not using XenServer, then you should consider PVS. XenServer + MCS should be whispering IntelliCache in your mind. If you're not using XenServer, then you can't really use IntelliCache, but you can obtain similar performance with PVS. What? You're convinced that VMware's CBRC (Content-Based Read Cache) will solve your issue? Then my next point is that...

Scale is the biggest factor. I bet you thought I was going to say MCS couldn't scale. I'm not! It can support large numbers, even over 5,000 seats, which is usually where I recommend XenDesktop over VMware View. The reason I do this is because of updates. Have you ever tried to update a large View or Citrix MCS deployment? I hope you have a movie or two to watch, because off a single image it takes quite a long time. Had you used PVS, a reboot, probably staggered, is really all you needed. So the next question is... why do I care?
4. Updating the image is really the key for PVS; this beats out almost all other arguments for me. Demos and POCs won't show the pain you will encounter once you scale out. Recomposing, to borrow the VMware term, involves rebuilding the image and propagating it. This can add up VERY quickly into some unacceptable update times. Add to this the inability to quickly recover, or to add that last-minute update you forgot. How are you dealing with normal update cycles? Do you assume you'll never update the image? Good luck with that.

The exception involves a low desktop-to-vDisk ratio. If I have 500 desktops but use 10 images, a recompose isn't going to kill me. Furthermore, if I have multiple pools to help divide the work, this also helps. The issue with that method, though, is that it invalidates the simplicity MCS offers in the first place. You now have multiple images to update.
5. A high-level overview of a PVS setup. PVS is database driven (btw, we usually enable offline mode, disabled by default, in production environments). You need to make sure SQL is set up well. The PVS server holds the gold image on a data store; this is generally a read-only copy of an OS image (think the C: drive). A Target Device is a virtual or physical machine (usually a VM) that often is really a placeholder or shell for the streamed C: drive or gold image. I generally add a D: drive (a write cache). A target device has no C: drive and must have a NIC that can PXE boot.

We usually send the target a bootstrap file through DHCP & PXE that tells it to download a TFTP BIN file. It loads the BIN file and runs it; the BIN pulls in the C: drive from the PVS server over the network, and boot proceeds normally. If a D: drive is present (and a few other steps are done), it will place all the writes on the D: drive; otherwise it needs to put them somewhere else (to be continued!)
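To make that last point concrete, here is a minimal, Windows-only Python sketch (my addition, not from the deck) of the kind of check involved: does the target have a D: drive that exists, is NTFS, and is writable? The drive letter and NTFS condition come from the description above; the function itself is hypothetical, since PVS performs its own checks internally.

```python
import ctypes
import os

def is_write_cache_candidate(drive="D:\\"):
    """Return True if `drive` exists, is NTFS, and is writable --
    the conditions described above for a target's write-cache disk.
    Illustrative only; PVS does its own internal checks."""
    if not os.path.exists(drive):
        return False

    # Ask Windows for the filesystem name (e.g. "NTFS") of the volume.
    fs_name = ctypes.create_unicode_buffer(64)
    ok = ctypes.windll.kernel32.GetVolumeInformationW(
        ctypes.c_wchar_p(drive), None, 0, None, None, None,
        fs_name, len(fs_name))
    if not ok or fs_name.value != "NTFS":
        return False

    # Prove the drive is writable by creating and removing a probe file.
    probe = os.path.join(drive, ".wc_probe")
    try:
        with open(probe, "w") as f:
            f.write("x")
        os.remove(probe)
        return True
    except OSError:
        return False

if __name__ == "__main__":
    print("D: usable as write cache:", is_write_cache_candidate())
```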
6. For HA we should always add another PVS server with a SEPARATE vDisk store (you can mix SAN, local disk, etc. here). If we leave DHCP alone we add a point of failure where target devices may fail to boot. You can use 2008 R2 or 2012 to provide split scope, or utilize a more redundant solution such as BlueCat or Infoblox. PXE and TFTP are another point of HA concern; you can only provide true HA with a hardware load balancer. I often do NOT provide HA for TFTP, but if you have a hardware load balancer there is no reason not to. PXE will load the bootstrap which, if your PVS servers are not specified in it, won't work (you need to add them).

Use mirroring with SQL if you can. It's great, and clustering doesn't really prevent you from dealing with issues such as the storage failing! If your storage will never ever fail then that's awesome, but keep in mind I can use local storage and mirroring and pretty much get the same benefits, well, except for the feeling of spending tons of money. Clustering helps update SQL nodes one at a time while keeping SQL up; this generally is not something I do, but I do recommend mirroring. Mirroring requires a witness server, a third server that doesn't do anything other than help with the quorum (SQL deciding which server is primary). If you set this up and lose both a secondary and the witness, the primary will stop. I often put my witness on a local disk.
7. Personally, I think going with Centralized is not a good idea. CIFS sucks performance-wise, and you need to realize where the data lives. Is it off a NAS head on a SAN? CIFS requires a lot of processing; some vendors have even started removing it while providing NFS. Speaking of "weirdness", this can come from Centralized also and is really a result of HA. PVS still doesn't seem that "smart" for new image creation or for versioning sometimes. Often your best bet is to shut down the other server (for two-node clusters) or...

Much of this slide's data is from SUM305 from 2012 (Gareth O'Brien).
8. You ALWAYS want to cache on the device hard drive; your write IOPS are at the device. Server-based caching will send the writes over the network and just add overhead and latency. RAM is pretty cool, but you've got to size that correctly or you risk filling up the cache. It is as fast as your memory, so you should play with it if you get the chance.
9. PVS will place the page file on the first disk other than C: that is NTFS, if it fits. So if you size a 5GB cache and have a 3GB page file, you get less than 2GB for cache. Sizing the page file is beyond this talk, but you want to size them correctly.
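To spell out that arithmetic, here is a tiny Python sketch (my addition) of the cache space left once the page file lands on the write-cache disk, using the slide's 5GB/3GB example:

```python
def effective_cache_gb(cache_disk_gb, pagefile_gb):
    """Space left for the PVS write cache once the page file is placed
    on the same disk, as described above. Real disks also lose a bit
    of space to filesystem overhead, hence "less than" on the slide."""
    remaining = cache_disk_gb - pagefile_gb
    if remaining <= 0:
        raise ValueError("page file alone would fill the cache disk")
    return remaining

# The slide's example: a 5 GB cache disk minus a 3 GB page file
# leaves (a bit less than) 2 GB for the actual write cache.
print(effective_cache_gb(5, 3))  # -> 2
```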
Reference for page files: http://blogs.citrix.com/2011/12/23/the-pagefile-done-right/

Some great blogs out there on sizing. My personal favorite, and I think he provides a great explanation, is Paul Wilson: http://virtualizationjedi.com/2012/10/02/determining-the-size-of-your-provisioning-services-write-cache/

Kenny Baldwin from iVision in Atlanta has a great script that will monitor PVS cache sizes over 70% and send an alert. I haven't used it yet because he posted it today: http://desktopsandapps.com/2013/05/23/pvs-write-cache-monitor/
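In the same spirit (his script is the real thing; this is only a rough Python sketch of the idea), a monitor could watch write-cache drive utilization and alert past 70%. The drive letter, threshold, and print-based "alert" are all assumptions for illustration:

```python
import shutil

def check_write_cache(drive="D:\\", threshold=0.70):
    """Print an alert when the write-cache drive passes `threshold`
    utilization. A sketch of the monitoring idea only; a production
    version would send mail or raise a monitoring event instead."""
    usage = shutil.disk_usage(drive)
    used_fraction = usage.used / usage.total
    if used_fraction > threshold:
        print(f"ALERT: write cache on {drive} is {used_fraction:.0%} full "
              f"({usage.used / 2**30:.1f} GiB used)")
    return used_fraction

if __name__ == "__main__":
    check_write_cache()
```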
10. If you can, put DHCP on the PVS server. You're putting the service on the server that needs to use it. This is important if you use a dual-NIC, isolated network, as whatever you otherwise use for DHCP won't reach that network. In this case, though, if you're on an AD domain you'll need domain admin access to authorize a new DHCP server, even on an isolated network. If that's not going to happen, you "could" use some freeware DHCP servers, but I'd steer away from them in production.
11. Dual NICs make sense for 1GbE or slower NICs. You also want an isolated network when you have PXE conflicts on the main network; perhaps LANDesk is conflicting? If you use Hyper-V you will most likely use two NICs: you're stuck doing PXE from a legacy adapter, which is 100Mbps. Although some say this is usually sufficient, or that the speed is just a label and not an actual limit, for production I always assume it's too slow and labelled correctly. You would then add a second enhanced NIC that does everything else. This setup obviously lends itself well to an isolated PVS VLAN setup.
The IPC key defines which NIC to use for IPC communication in a multi-NIC environment: under HKEY_LOCAL_MACHINE\Software\Citrix\ProvisioningServices\IPC, create a REG_SZ called IPv4Address with the IP of the NIC for IPC. Without it, stores, replication, load balancing, etc. won't work; it affects the Stream Service. The Manager key for MAPI works the same way: under HKEY_LOCAL_MACHINE\SOFTWARE\Citrix\ProvisioningServices\Manager, create a REG_SZ called GeneralInetAddr with the IP of the NIC and port, e.g. 10.1.1.2:6909. BTW, both keys usually point to the NIC you are using for PVS streaming.
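Since these two values are easy to fat-finger, here is a minimal Python sketch (my addition; key paths and value names exactly as listed above, the IP and port are placeholder assumptions) that sets both to the streaming NIC's address:

```python
import winreg

# IP of the NIC used for PVS streaming -- an assumption; use your own.
STREAM_IP = "10.1.1.2"
MAPI_PORT = 6909  # port taken from the slide's example value

def set_string_value(subkey, name, value):
    """Create or update a REG_SZ value under HKLM (run as administrator)."""
    with winreg.CreateKey(winreg.HKEY_LOCAL_MACHINE, subkey) as key:
        winreg.SetValueEx(key, name, 0, winreg.REG_SZ, value)

# IPC key: which NIC the Stream Service uses for inter-server traffic.
set_string_value(r"Software\Citrix\ProvisioningServices\IPC",
                 "IPv4Address", STREAM_IP)

# Manager key: NIC and port for MAPI, in ip:port form.
set_string_value(r"SOFTWARE\Citrix\ProvisioningServices\Manager",
                 "GeneralInetAddr", f"{STREAM_IP}:{MAPI_PORT}")
```

Restart the PVS services afterwards so the Stream Service picks up the new addresses.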
Also, you can actually bypass PXE and use the Boot Device Manager; BDM can burn an ISO or write to the disk itself. It's not a bad option, but generally I use PXE.
12. Versioning is a fantastic addition to PVS; it was introduced in version 6. It is simply a snapshot for your vDisks. I use versioning all the time, but when I make major updates I'll make a full copy. Not a bad practice, just in case something gets corrupted. You have to keep an eye on how deep the versions get; I almost never go past 7 deep. Too many versions will affect performance.
13. Versioning on PVS with HA can be tricky. You should disable anything that is automatically copying disks to the other stores when you create a new version, since it is writeable. Obviously, once you are done and seal the version (promoting it to test or production), you should copy it (again, AFTER promoting it) to the other stores.
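A minimal Python sketch of that copy step follows. The store paths, the vDisk name, and the file extensions (.avhd differencing disks, .pvp properties file, .xml manifest) are my assumptions of a typical PVS 6.x store layout, not something the deck specifies:

```python
import shutil
from pathlib import Path

# Store paths are illustrative assumptions; substitute your own stores.
SRC_STORE = Path(r"E:\PVSStore")
DST_STORE = Path(r"\\pvs02\PVSStore")

def copy_promoted_version(vdisk_name):
    """Copy a promoted vDisk version's files to the second store.
    Run this only AFTER promoting/sealing, as described above."""
    patterns = (f"{vdisk_name}*.avhd", f"{vdisk_name}.pvp",
                f"{vdisk_name}.xml")
    for pattern in patterns:
        for f in SRC_STORE.glob(pattern):
            shutil.copy2(f, DST_STORE / f.name)
            print(f"copied {f.name}")

copy_promoted_version("Win7Gold")
```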
Sometimes the maintenance version is placed on the other PVS server; in this case you may want to use an ISO to boot, shut the Stream Service down on one, move the file, or even start over.
14. The bootstrap for TFTP lists your failover servers; this is true for both ISO and DHCP boots, so you need to list them all, otherwise failover will not occur. This is NOT HA, it's failover. You always want to make sure the guidelines are followed for the NIC setup, most notably disabling TCP Offload.
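For reference, that is typically done with the DisableTaskOffload registry value; here is a minimal Python sketch of setting it (key path and value name per the long-standing Citrix guidance as I recall it, so verify against the current Citrix article before relying on it):

```python
import winreg

# Disable TCP task offload, per long-standing Citrix PVS NIC guidance.
# Run as administrator; a reboot is needed for the change to apply.
KEY = r"SYSTEM\CurrentControlSet\Services\Tcpip\Parameters"

with winreg.CreateKey(winreg.HKEY_LOCAL_MACHINE, KEY) as key:
    winreg.SetValueEx(key, "DisableTaskOffload", 0, winreg.REG_DWORD, 1)
```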
If you use DFS-R, do not use the read-only mode; just don't use it, however tempting it may be.
15. You can disable the boot menu for maintenance target devices. If you didn't know, when you boot a target device in maintenance mode it will prompt you on boot as to which vDisk version you would like to use. This is an issue if you weren't prepared to use the console of the machine. There is a way around this, however! Set the skipbootmenu registry value.
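A quick Python sketch of setting it on the PVS server follows; the StreamProcess key path is my recollection of where skipbootmenu lives, so treat it as an assumption and confirm with the Citrix documentation:

```python
import winreg

# Suppress the vDisk-version boot menu for maintenance devices.
# Key path and value name are from memory of the Citrix KB article;
# confirm against current Citrix documentation before using.
KEY = r"SOFTWARE\Citrix\ProvisioningServices\StreamProcess"

with winreg.CreateKey(winreg.HKEY_LOCAL_MACHINE, KEY) as key:
    winreg.SetValueEx(key, "SkipBootMenu", 0, winreg.REG_DWORD, 1)
```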
Don't be scared of the advanced settings; the remote and local concurrent I/O limits can be set higher than the default of 4 if you have fast disks. If you have very fast disks, you can eliminate the limit by setting it to 0. Add Network Service to the vDisk security settings if you have "can't read from disk" errors. Also add SPNs for the service accounts.
16. 1) HA topology
2) vDisk properties, including target devices
3) Versioning