Session sharing for VM hosted applications

That guarantees that each source will have regular opportunities to transmit the latest information. If a source has no more data to transmit, then the rest of its time slot is wasted. If it has more to send than will fit in its slot, it has to either store the excess data and transmit it in its next slot, or discard it. Note that whether messages are isochronous or asynchronous is independent of whether the transmission of individual bits is synchronous or asynchronous.
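
To make the slot behavior concrete, here is a minimal Python sketch (not part of the original material) that models one source's fixed time slots with an assumed slot capacity, showing how idle capacity is wasted while excess data is either buffered for a later slot or discarded.

```python
SLOT_CAPACITY = 4  # units of data a source may send per time slot (assumed)

def run_slots(arrivals, policy="buffer"):
    """Simulate one source's fixed time slots.

    arrivals: units of new data generated before each slot.
    policy:   "buffer"  -> excess is held and sent in a later slot
              "discard" -> excess beyond the slot capacity is dropped
    """
    backlog = 0
    sent, wasted, dropped = 0, 0, 0
    for new_data in arrivals:
        backlog += new_data
        tx = min(backlog, SLOT_CAPACITY)
        sent += tx
        wasted += SLOT_CAPACITY - tx   # idle slot capacity is lost forever
        backlog -= tx
        if policy == "discard":
            dropped += backlog         # excess cannot wait for the next slot
            backlog = 0
    return sent, wasted, dropped

# Example: a bursty source -- some slots sit idle, others overflow.
print(run_slots([0, 10, 0, 2], policy="buffer"))
print(run_slots([0, 10, 0, 2], policy="discard"))
```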

Isochronous communication suits applications where a steady data stream is more important than completeness and accuracy, e.g., streaming audio or video.

Licensing for VM hosted apps works as follows: users consume one XenApp license for all app sessions, whether VM hosted or server hosted. If a user already has another application open, opening a VM hosted app consumes no new XenApp license. If a VM hosted app is opened and no other app is open, a new XenApp license is consumed. If additional applications are then opened, the sessions share licenses across XenApp.

The basic delivery flow: the admin creates a standard desktop image or vDisk with the installed app; the user launches the app from a XenApp Web or Services site; a pooled VM is booted for the user; the app executes within the VM and displays remotely to the user in seamless mode; and the user interacts with the app remotely, with screen updates, mouse clicks, and keystrokes sent between user and server. Helper apps (online plugin, offline plugin, single sign-on plugin, and so on) may also be installed. Support for streamed or server-hosted apps within a VM-hosted environment is also planned for the final release.

Launching multiple apps: only one shortcut may be placed into the SeamlessInitialProgram folder. If more than one application needs to be launched, copy a shortcut to a script that runs the multiple apps into the SeamlessInitialProgram folder rather than multiple application shortcuts, which would cause a user error upon launch. A minimal launcher sketch is shown below.
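
As a hedged illustration of such a script, the following Python sketch launches several applications from a single entry point. The application paths are hypothetical placeholders; in practice a simple Windows batch or similar script placed in SeamlessInitialProgram works the same way.

```python
import subprocess

# Hypothetical application paths -- replace with the real executables
# that the VM hosted app session should start.
APPS = [
    r"C:\Program Files\VendorA\app_a.exe",
    r"C:\Program Files\VendorB\app_b.exe",
]

def launch_all(apps):
    """Start every application without waiting between launches, so a single
    shortcut to this script can stand in for several application shortcuts."""
    procs = [subprocess.Popen([app]) for app in apps]
    # Optionally keep the launcher alive until the last app closes.
    for p in procs:
        p.wait()

if __name__ == "__main__":
    launch_all(APPS)
```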



A scenario where this is particularly helpful is when you wish to launch helper apps that are streamed to the virtual machine or hosted on a XenApp server. The Citrix online plugin (PNAgent) and offline plugin need to be running. The number of virtual machines (VMs) a block can typically host depends on the type of Horizon 7 VMs used.

To add more resource capacity, we simply add more resource blocks. We also add an additional Connection Server for each additional block to add the capability for more session connections. Typically, we have multiple resource blocks and up to seven Connection Servers in a pod capable of hosting 10,000 sessions. For numbers above that, we deploy additional pods. As you can see, this approach allows us to design a single block capable of thousands of sessions that can then be repeated to create a pod capable of handling 10,000 sessions.

Multiple pods grouped using Cloud Pod Architecture can then be used to scale the environment as large as needed. Important: A single pod and the Connection Servers in it must be located within a single data center and cannot span locations. Multiple Horizon 7 pods and locations must be interconnected using Cloud Pod Architecture.

In large environments, for scalability and operational efficiency, it is normally best practice to have a separate vSphere cluster to host the management components. Management components can be co-hosted on the same vSphere cluster as the end-user resources, if desired. This architecture is more typical in smaller environments, or where converged hardware is used and the cost of providing dedicated hosts for management is too high. If you place everything on the same vSphere cluster, you must configure the setup to ensure resource prioritization for the management components.

Sizing of resources (for example, virtual desktops) must also take into account the overhead of the management servers. One key design principle is to remove single points of failure in the deployment. The numbers, limits, and recommendations given in this section were correct at the time of writing. A single Connection Server supports a maximum of 4,000 sessions using the Blast Extreme or PCoIP display protocol, although 2,000 is recommended as a best practice.

Up to seven Connection Servers are supported per pod, with a recommendation of 10,000 sessions in total per pod. The following limits have been tested. Just because VMware publishes these configuration maximums does not mean you should necessarily design to them. Using a single vCenter Server does introduce a single point of failure that could affect too large a percentage of the VMs in your environment.
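
The following Python sketch simply restates the sizing arithmetic above (2,000 recommended sessions per Connection Server, seven Connection Servers and 10,000 sessions per pod). The block size, the extra spare Connection Server, and the helper itself are assumptions for illustration, not VMware tooling.

```python
import math

SESSIONS_PER_CONNECTION_SERVER = 2000   # recommended load, well under the 4,000 maximum
MAX_CONNECTION_SERVERS_PER_POD = 7
MAX_SESSIONS_PER_POD = 10000

def size_pods(total_sessions, block_size=4000):
    """Rough pod, block, and Connection Server counts for a session target."""
    pods = math.ceil(total_sessions / MAX_SESSIONS_PER_POD)
    sessions_per_pod = math.ceil(total_sessions / pods)
    blocks_per_pod = math.ceil(sessions_per_pod / block_size)
    # Enough Connection Servers for the recommended per-server load,
    # plus one spare, capped at the per-pod limit.
    connection_servers = min(
        MAX_CONNECTION_SERVERS_PER_POD,
        math.ceil(sessions_per_pod / SESSIONS_PER_CONNECTION_SERVER) + 1,
    )
    return pods, blocks_per_pod, connection_servers

print(size_pods(8000))   # the 8,000-session design discussed later -> (1, 2, 5)
```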

Therefore, carefully consider the size of the failure domain and the impact should a vCenter Server become unavailable. A single vCenter Server might be capable of supporting your whole environment, but to reduce risk and minimize the impact of an outage, you will probably want to include more than one vCenter Server in your design. Sizing can also have performance implications because a single vCenter Server could become a bottleneck if too many provisioning tasks run at the same time.

Do not just size for normal operations but also understand the impact of provisioning tasks and their frequency. For example, consider instant-clone desktops, which are deleted after a user logs off and are provisioned when replacements are required. Although a floating desktop pool can be pre-populated with spare desktops, it is important to understand how often replacement VMs are being generated and when that happens. Are user logoff and the demand for new desktops spread throughout the day?

Or are desktop deletion and replacement operations clustered at certain times of day?


If these events are clustered, can the number of spare desktops satisfy the demand, or do replacements need to be provisioned? How long does provisioning desktops take, and is there a potential delay for users? A single resource block and a single vCenter Server are supported for the intended target of 8,000 instant-clone VMs; however, having a single vCenter Server for the entire user environment presents too large a failure domain. Splitting the environment across two resource blocks, and therefore over two vCenter Servers, reduces the impact of any potential outage.
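
Whether a clustered logoff burst can be absorbed depends on the spare-desktop pool and the provisioning rate. The sketch below is a rough model under simplifying assumptions (a fixed logoff rate, a fixed cloning rate, and immediate replacement requests); the example numbers are invented.

```python
def burst_shortfall(logoffs_per_min, spare_desktops, provision_per_min, burst_minutes):
    """Estimate how many desktop requests could go unmet during a logoff burst.

    Assumes each logoff soon triggers a request for a fresh desktop and that
    provisioning runs continuously at a fixed rate (both simplifications).
    """
    shortfall = 0
    spares = spare_desktops
    for _ in range(burst_minutes):
        spares += provision_per_min     # newly cloned desktops become available
        demand = logoffs_per_min        # users needing a fresh desktop this minute
        served = min(spares, demand)
        spares -= served
        shortfall += demand - served
    return shortfall

# Example: 200 logoffs/min for 10 minutes, 500 spares, 60 clones/min provisioned.
print(burst_shortfall(200, 500, 60, 10))
```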

This approach also allows each resource block to scale to a higher number of VMs and allow for growth, up to the pod recommendation, without requiring us to rearchitect the resource blocks. A single JMP Server is supported per pod. As part of the pairing process, the Horizon 7 Cloud Connector virtual appliance connects the Connection Server to the Horizon Cloud Service to manage the subscription license. With a subscription license for Horizon 7, you do not need to retrieve or manually enter a license key for Horizon 7 product activation.

However, license keys are still required for supporting the components, which include vSphere, vSAN, and vCenter Server. A single Cloud Connector VM is supported per pod. The Composer server is required only when using linked clones. Instant clones do not require a Composer server. Each Composer server is paired with a vCenter Server. For example, in a block architecture where we have one vCenter Server per 4, linked-clone VMs, we would also have one Composer server.

If the VMware View Composer service becomes unavailable, all existing desktops can continue to work just fine. While vSphere HA is restarting the Composer VM, the only impact is on any provisioning tasks within that block, such as image refreshes or recomposes, or creating new linked-clone pools. Instant clones satisfy all use cases, which means that linked clones and the Composer service are not required. If the requirements change, a separate server running the Composer service can easily be added to the design.

For high availability and scalability, VMware recommends that multiple Connection Servers be deployed in a load-balanced replication cluster. Connection Servers broker client connections, authenticate users, and direct incoming requests to the correct endpoint. Although the Connection Server helps form the connection for authentication, it typically does not act as part of the data path after a protocol session has been established. The load balancer serves as a central aggregation point for traffic flow between clients and Connection Servers, sending clients to the best-performing and most available Connection Server instance.

Using a load balancer with multiple Connection Servers also provides greater flexibility by enabling IT administrators to perform maintenance, upgrades, and configuration changes without impacting users. Connection Servers require the load balancer to have a session persistence setting. Secure external access for users accessing resources is provided through the integration of Unified Access Gateway (UAG) appliances. We also use load balancers to provide scalability and allow for redundancy. A Unified Access Gateway appliance can be used in front of Connection Servers to provide access to on-premises Horizon 7 desktops and published applications.
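
Session persistence simply means that a given client keeps landing on the same Connection Server. The sketch below shows one generic way to think about it (source-address affinity via hashing); it is not the configuration syntax of any particular load balancer, whose persistence settings vary by vendor, and the server names are hypothetical.

```python
import hashlib

CONNECTION_SERVERS = ["cs-01", "cs-02", "cs-03"]  # hypothetical server names

def pick_server(client_ip, servers=CONNECTION_SERVERS):
    """Source-address affinity: the same client always maps to the same
    Connection Server as long as the server list is unchanged."""
    digest = hashlib.sha256(client_ip.encode()).hexdigest()
    return servers[int(digest, 16) % len(servers)]

print(pick_server("10.0.0.15"))
print(pick_server("10.0.0.15"))  # same client -> same server
```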

Five standard-size Unified Access Gateway appliances were deployed as part of the Horizon 7 solution. One of the methods of accessing Horizon 7 desktops and applications is through VMware Identity Manager. When SSO is enabled, users who log in to VMware Identity Manager with Active Directory credentials can launch remote desktops and applications without having to go through a second login procedure.

Active Directory credentials are only one of many supported authentication options. Ordinarily, using anything other than AD credentials would prevent a user from achieving single sign-on to a Horizon 7 virtual desktop or published application. After selecting the desktop or published application from the catalog, the user would be prompted to authenticate again, this time with AD credentials. True SSO generates unique, short-lived certificates to manage the login process.

For True SSO to function, several components must be installed and configured within the environment. This section discusses the design options and details the design decisions that satisfy the requirements. The Enrollment Server receives certificate signing requests (CSRs) from the Connection Server and passes them to the Microsoft Certificate Authority to sign using the relevant certificate template.
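
To illustrate what a certificate signing request is in this flow, the following Python sketch uses the third-party cryptography package to build a CSR for a hypothetical user principal name. It shows the general mechanism only; the Horizon Enrollment Server's actual implementation, templates, and certificate lifetimes are not represented here.

```python
from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.x509.oid import NameOID

# Generate a key pair and a CSR for a hypothetical user principal name.
key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
csr = (
    x509.CertificateSigningRequestBuilder()
    .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "jdoe@example.com")]))
    .sign(key, hashes.SHA256())
)

# The PEM-encoded CSR is what a certificate authority would sign
# against the relevant certificate template.
print(csr.public_bytes(serialization.Encoding.PEM).decode())
```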

The Enrollment Server is a lightweight service that can be installed on a dedicated Windows Server instance, or it can co-exist with the Microsoft Certificate Authority service. It cannot be co-located on a Connection Server. A single Enrollment Server can easily handle all the requests from a single pod of 10,000 sessions. The constraining factor is usually the Certificate Authority (CA). Additionally, ensure that the Certificate Authority service is deployed in a highly available manner, to ensure complete solution redundancy.

Two Enrollment Servers were deployed in the environment, and the Connection Servers were configured to communicate with both deployed Enrollment Servers. The Enrollment Servers can be configured to communicate with two Certificate Authorities. It is recommended to change the Enrollment Server selection behavior to round robin when configuring two Enrollment Servers per pod, to achieve high availability. DRS rules are configured to ensure that the two Enrollment Server VMs do not reside on the same vSphere host. The following diagram shows the server components and the logical architecture for a single-site deployment of Horizon 7.

For clarity, the focus in this diagram is to illustrate the core Horizon 7 server components, so it does not include additional and optional components such as App Volumes, User Environment Manager, and VMware Identity Manager. Note: In addition to Horizon 7 server components, the following diagram shows database components, including Microsoft availability group (AG) listeners.


This reference architecture documents and validates the deployment of all features of Horizon 7 Enterprise Edition across two data centers. A key component in this reference architecture, and what makes Horizon 7 Enterprise Edition truly scalable and able to be deployed across multiple locations, is Cloud Pod Architecture (CPA). CPA introduces the concept of a global entitlement (GE) through joining multiple pods together into a federation. This feature allows us to provide users and groups with a global entitlement that can contain desktop pools or RDSH-published applications from multiple different pods that are members of this federation construct.

This feature provides a solution for many different use cases, even though they might have different requirements in terms of accessing the desktop resource. The following figure shows a logical overview of a basic two-site CPA implementation, as deployed in this reference architecture design. Important : This type of deployment is not a stretched deployment.

Each pod is distinct, and all Connection Servers belong to a specific pod and are required to reside in a single location and run on the same broadcast domain from a network perspective. As well as being able to have desktop pool members from different pods in a global entitlement, this architecture allows for a property called scope. Scope allows us to define where new sessions should or could be placed and also allows users to connect to existing sessions that are in a disconnected state when connecting to any of the pod members in the federation.

Pods are joined together using Cloud Pod Architecture configured with global entitlements. A user is assigned to a given data center with global entitlements, and user home sites are configured. The user actively consumes Horizon 7 resources from that pod and site and will only consume from the other site in the event that their primary site becomes unavailable.

The pods are joined using Cloud Pod Architecture, which is configured with global entitlements. A user is assigned global entitlements that allow the user to consume Horizon 7 resources from either pod and site. No preference is given to which pod or site they consume from. The challenges with this approach are usually related to replication of user data between sites.

This architecture is unsupported and is only shown here to stress why it is not supported. Connection Servers within a given site must always run on a well-connected LAN segment and therefore cannot be running actively in multiple geographical locations at the same time. A common approach is to provide a single namespace for users to access Horizon pods deployed in separate locations. The use of a single namespace makes access simpler for users and allows for administrative changes or implementation of disaster recovery and failover without requiring users to change the way they access the environment.

The following diagram shows the server components and the logical architecture for a multi-site deployment of Horizon 7. Connection Servers and Composer servers run as Windows services. Horizon 7 is a multi-protocol solution, and its Blast Extreme protocol has full feature and performance parity with PCoIP and is optimized for mobile devices, which can decode video using the H.264 codec.

Blast Extreme is configured through Horizon 7 when creating a pool. The display protocol can also be selected directly on the Horizon Client side when a user selects a desktop pool. Traditionally, managing and monitoring enterprise environments has involved a bewildering array of systems, requiring administrators to switch between multiple consoles to support the environment. These components are described here, and design options are discussed and determined. Other adapters can be added to gather information from other sources; for example, the VMware vSAN management pack can be used to display vSAN storage metrics within the vRealize Operations Manager dashboards.

A cluster consists of a master node and an optional replica node to provide high availability for cluster management. It can also have additional data nodes and optional remote collector nodes. Deploy nodes to separate vSphere hosts to reduce the chance of data loss in the event that a physical host fails. The master node is the initial, required node in vRealize Operations Manager.

All other nodes are managed by the master node. In a single-node installation, the master node manages itself, has adapters installed on it, and performs all data collection and analysis. When high availability is enabled on a cluster, one of the data nodes is designated as a replica of the master node and protects the analytics cluster against the loss of a node.

When you enable HA, you protect vRealize Operations Manager from data loss in the event that a single node is lost by duplicating data. If two or more nodes are lost, there might be permanent data loss. If vRealize Operations Manager is monitoring resources in additional data centers, you must use remote collectors and deploy the remote collectors in the remote data centers. Because of latency issues, you might need to modify the intervals at which the configured adapters on the remote collector collect information.

Remote collectors can also be used within the same data center as the cluster. Adapters can be installed on these remote collectors instead of the cluster nodes, freeing the cluster nodes to handle the analytical processing. A collector group is a collection of nodes (analytic nodes and remote collectors). You can assign adapters to a collector group rather than to a single node. If the node running the adapter fails, the adapter is automatically moved to another node in the collector group. For enterprise deployments of vRealize Operations Manager, deploy all nodes as large or extra-large deployments, depending on sizing requirements and your available resources.

Use the spreadsheet attached to this KB to assist with sizing. Two large cluster nodes support the required number of Horizon VMs (8,000) and meet the requirements for high availability. Although medium-sized nodes would suffice for the current number of VMs, deploying large-sized nodes follows best practice for enterprise deployments and allows for growth in the environment without needing to rearchitect.

The Horizon adapter obtains inventory information from the broker agent and collects metrics and performance data from desktop agents. The adapter passes this data to vRealize Operations Manager for analysis and visualization. The Horizon adapter runs on the master node or a remote collector node in vRealize Operations Manager. Adapter instances are paired with one or more broker agents to receive communications from them.

Creating a Horizon adapter instance on a remote collector node is recommended in certain scenarios. Creating more than one Horizon adapter instance per collector is not supported. You can pair the broker agents installed in multiple pods with a single Horizon adapter instance as long as the total number of desktops in those pods does not exceed 10,000. If you need to create multiple adapter instances, you must create each instance on a different node.

A Horizon adapter instance was created on each of these nodes to collect data from their local Horizon pods. Separating the Horizon adapter onto a remote collector is recommended in environments of more than 5,000 desktops. The environment is designed for 8,000 users. Horizon adapter instances are not supported on a collector group.


Using remote collectors for remote sites allows for efficient data collection. The broker agent is a Windows service that runs on a Horizon Connection Server host, collects Horizon 7 inventory information, and then sends that information to the vRealize Operations for Horizon adapter. The desktop agent collects metrics and performance data from each desktop session and sends that data to the Horizon adapter. The vRealize Operations for Horizon desktop agent can be installed as a part of the Horizon Agent installation.

This service comprises multiple software components. VMware is responsible for hosting the Horizon Cloud Service control plane and providing feature updates and enhancements for a software-as-a-service experience. The cloud control plane also hosts a common management user interface called the Horizon Cloud Administration Console, or Administration Console for short.

The Administration Console runs in industry-standard browsers. It provides you with a single location for management tasks involving user assignments, virtual desktops, RDSH-published desktop sessions, and applications. The Administration Console is accessible from anywhere at any time, providing maximum flexibility. This section discusses the design options and details the design decisions that were made to satisfy the design requirements of this reference architecture. The following figure shows the high-level logical architecture of these core elements.

Other components are shown for illustrative purposes. This figure demonstrates the basic logical architecture of a Horizon Cloud Service pod on your Microsoft Azure capacity. The jumpbox is a temporary Linux-based VM used during environment buildout and for subsequent environment updates and upgrades. The management VM appliance provides access for administrators and users to operate and consume the platform. This cloud-based control plane is the central location for conducting all administrative functions and policy management.

From the control plane, you can manage your virtual desktops and RDSH server farms and assign applications and desktops to users and groups from any browser on any machine with an Internet connection. The cloud control plane provides access to manage all Horizon Cloud pods deployed to your Microsoft Azure infrastructure in a single, centralized user interface, no matter which regional data center you use. This component of the control plane is the web-based UI that administrators use to provision and manage Horizon Cloud desktops and applications, resource entitlements, and VM images.

Organizations can securely provision and manage desktop models and entitlements, as well as native and remote applications, through this console. The console also provides usage and activity reports for various user, administrative, and capacity-management activities. This design accommodates an environment capable of scaling to 6,000 concurrent connections or users. When creating your design, keep in mind that you want an environment that can scale up when necessary, and also remain highly available.

Design decisions need to be made with respect to some Microsoft Azure limitations, and with respect to some Horizon Cloud limitations. Horizon Cloud on Microsoft Azure has certain configuration maximums that you must take into account when making design decisions. To handle larger user environments, you can deploy multiple Horizon Cloud pods, but take care to follow the accepted guidelines for segregating the pods from each other. For example, under some circumstances, you might deploy a single pod in two different Microsoft Azure regions, or you might be able to deploy two pods in the same subscription in the same region, as long as the IP address space is large enough to handle multiple deployments.

Each Microsoft Azure region can have different infrastructure capabilities. You can leverage multiple Microsoft Azure regions for your infrastructure needs. These deployments are a part of your Microsoft Azure subscription or subscriptions. A subscription is a logical segregation of Microsoft Azure capacity that you are responsible for. You can have multiple Microsoft Azure subscriptions as a part of the organization defined for you in Microsoft Azure.

A Microsoft Azure subscription is an agreement with Microsoft to use one or more Microsoft cloud platforms or services, for which charges accrue based either on a per-user license fee or on cloud-based resource consumption. Some of the limitations for individual Microsoft Azure subscriptions might impact designs for larger Horizon Cloud on Microsoft Azure deployments.

If necessary, you can leverage multiple Microsoft Azure subscriptions for a Horizon Cloud on Microsoft Azure deployment. Note: You might need to request increases in quota allotment for your subscription in any given Microsoft Azure region to accommodate your design. This strategy provides an environment capable of scaling to 6,000 concurrent connections or users, where each session involves a VDI desktop with 2 vCPUs (or cores), making a total requirement of 12,000 vCPUs. Because the requirement for 12,000 vCPUs exceeds the maximum number of vCPUs allowed per individual subscription, multiple subscriptions must be used.
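
The subscription arithmetic can be written out explicitly. In the sketch below, the per-subscription vCPU quota is an assumed placeholder; real quotas vary by region and subscription and can be raised on request.

```python
import math

def subscriptions_needed(users, vcpus_per_desktop=2, vcpu_quota_per_subscription=10000):
    """Rough count of Azure subscriptions needed for the total vCPU demand.

    vcpu_quota_per_subscription is a placeholder; check and, if necessary,
    raise the real quota for your region and subscription.
    """
    total_vcpus = users * vcpus_per_desktop
    return total_vcpus, math.ceil(total_vcpus / vcpu_quota_per_subscription)

print(subscriptions_needed(6000))   # -> (12000, 2) with the assumed quota
```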

The operation and design of these services are considered beyond the scope of this reference architecture because it is assumed that no design decisions you make will impact the nature of the services themselves. You can manually build and configure Horizon Cloud pods to provide applications and desktops in the event that you have an issue accessing a Microsoft Azure regional data center. Microsoft has suggestions for candidate regions for disaster recovery. As was mentioned previously, Horizon Cloud on Microsoft Azure has no built-in functionality to handle business continuity or regional availability issues.

In addition, the Microsoft Azure services and features regarding availability are not supported by Horizon Cloud on Microsoft Azure. Each Horizon Cloud pod is a separate entity and is managed individually. VM master images, assignments, and users must all be managed within each pod.

No cross-pod entitlement or resource sharing is available. You can manually spread users across multiple Horizon Cloud pods. However, each Horizon Cloud pod is managed individually, and there is no way to cross-entitle users to multiple pods. Although the same user interface is used to manage multiple Horizon Cloud pods, you must deploy separate VM images, RDSH server farms, and assignments on each pod individually.

For example, you could entitle different user groups to have exclusive access to different Horizon Cloud on Microsoft Azure deployments, and then join each pod to the same Active Directory. Note: Although this method works, there is currently no product support for automatically balancing user workloads across Horizon Cloud pods.

You can configure each pod to provide access to desktops and applications for end users located outside of your corporate network.

By default, Horizon Cloud pods allow users to access the Horizon Cloud environment from the Internet. When the pod is deployed with this ability configured, the pod includes a load balancer and Unified Access Gateway instances to enable this access. In this case, you must perform some post-deployment steps to create the proper internal network routing rules so that users on your corporate network have access to your Horizon Cloud environment. You can implement optional components to provide additional functionality and integration with other VMware products.

The following shared services are required for a successful Horizon Cloud on Microsoft Azure deployment. Ordinarily, using anything other than AD credentials would prevent a user from achieving SSO to a Horizon virtual desktop or published application through Horizon Cloud on Microsoft Azure. This enhances security because no passwords are transferred within the data center. The Enrollment Server is responsible for receiving certificate signing requests from the Connection Server and passing them to the Certificate Authority to sign using the relevant certificate template.

A single Enrollment Server can easily handle all the requests from a single pod. Additionally, ensure that the Certificate Authority service is deployed in a highly available manner, to ensure complete solution redundancy. To achieve high availability with two Enrollment Servers, it is recommended to co-host the Enrollment Server service with a Certificate Authority service on the same machine. App Volumes serves two functions.

An AppStack is a group of applications that are captured together. AppStacks can then be assigned to a user, group, organizational unit (OU), or machine, and can be mounted each time the user logs in to a desktop, or at machine startup. App Volumes also provides user-writable volumes, which can be used in specific use cases. Writable volumes provide a mechanism to capture user profile data, user-installed applications that are not (or cannot be) delivered by AppStacks, or both.

This reduces the likelihood that persistent desktops would be required for a use case. User profile data and user-installed applications follow the user as they connect to different virtual desktops. This design was created for an environment capable of scaling to 8,000 concurrent user connections. Note: If you are new to App Volumes, VMware recommends reviewing its introductory resources to familiarize yourself with the product.

The agents communicate with the App Volumes Manager instances to determine AppStack and writable-volume entitlements. AppStack and writable-volume virtual disks are attached to the guest VM, making applications and personalized settings available to end users. The following figure shows the high-level logical architecture of the App Volumes components, scaled out with multiple App Volumes Manager servers behind a third-party load balancer.

With App Volumes 2.x, most implementations will no longer benefit from enabling the Mount ESXi option.


You can enable the Mount Local storage option in App Volumes to check local storage first and then check central storage. Root-level access is not required. A detailed discussion of network requirements for App Volumes is outside of the scope of this guide. The App Volumes machine manager was configured for communication with the vCenter Server in each resource block.

Standardizing on the pod and block approach simplifies the architecture and streamlines administration. In a production Horizon 7 environment, it is important to adhere to the best practices that follow. As with all server workloads, it is strongly recommended that enterprises host App Volumes Manager servers as vSphere virtual machines. In production environments, avoid deploying only a single App Volumes Manager server.

It is far better to deploy an enterprise-grade load balancer to manage multiple App Volumes Manager servers connected to a central, resilient SQL Server database instance.


As with all production workloads that run on vSphere, underlying host, cluster, network, and storage configurations should adhere to VMware best practices with regard to availability. App Volumes Managers are the primary point of management and configuration, and they broker volumes to agents. For a production environment, deploy at least two App Volumes Manager servers; this ensures the availability of App Volumes services and distributes the user load. Although two App Volumes Managers might support the 8,000-concurrent-user design, additional managers are necessary to accommodate periods of heavy concurrent usage, such as logon storms.

Configuring multiple vCenter Servers is a way to achieve scale for a large Horizon 7 pod, for multiple data centers, or for multiple sites. With machine managers, you can use different credentials for each vCenter Server, but vSphere host names and datastore names must be unique across all vCenter Server environments. After you have enabled multi-vCenter-Server support in your environment, it is not recommended to revert to a single vCenter Server configuration.

App Volumes supports environments with multiple Active Directory domains, both with and without trusts configured between them. An account with a minimum of read-only permissions for each domain is required. You must add each domain that will be accessed for App Volumes by any computer, group, or user object. In addition, non-domain-joined entities are now allowed by default. Host configurations have significant impact on performance at scale.


Consider all ESXi best practices during each phase of scale-out. To support optimal performance of AppStacks and writable volumes, give special consideration to the host storage configuration. For best results, follow the recommendations of the relevant storage partner when configuring hosts and clusters. For high performance and availability, an external load balancer is required to balance connections between App Volumes Managers.

The main concern with App Volumes Managers is handling login storms. The greater the number of concurrent attachment operations, the more time it might take to get all users logged in. For App Volumes 2.x, VMware recommends that you test the load and then size the number of App Volumes Manager servers appropriately. To size this design, we assumed each App Volumes Manager was able to handle 2,000 users. The following figure shows how virtual desktops and RDSH-published applications can point to an internal load balancer that distributes the load to two App Volumes Managers.
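
A sketch of that sizing rule, using the 2,000-users-per-manager assumption from the text; the extra instance of headroom for logon storms and availability is a design choice for illustration, not a product requirement.

```python
import math

USERS_PER_MANAGER = 2000   # working assumption used for this design

def app_volumes_managers(concurrent_users, headroom=1):
    """Managers needed for steady state, plus headroom for logon storms / HA."""
    return max(2, math.ceil(concurrent_users / USERS_PER_MANAGER) + headroom)

print(app_volumes_managers(8000))   # e.g. 5 managers for the 8,000-user design
```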

This database is a critical aspect of the design, and it must be accessible to all App Volumes Manager servers. A successful implementation of App Volumes requires several carefully considered design decisions with regard to disk volume size, storage IOPS, and storage replication. When new AppStacks and writable volumes are deployed, predefined templates are used as the copy source. Administrators should place these templates on a centralized shared storage platform.

As with all production shared storage objects, the template storage should be highly available, resilient, and recoverable. AppStack sizing and writable volume sizing are critical for success in a production environment. AppStack volumes should be large enough to allow applications to be installed and should also allow for application updates. AppStacks should always have at least 20 percent free space available so administrators can easily update applications without having to resize the AppStack volumes.
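
A trivial sketch of that 20 percent free-space rule, assuming you know the installed size of the captured application set.

```python
import math

def appstack_volume_gb(installed_app_gb, free_fraction=0.20):
    """Smallest volume size (GB) that keeps the stated free-space margin."""
    return math.ceil(installed_app_gb / (1 - free_fraction))

print(appstack_volume_gb(8))   # an 8 GB application set -> 10 GB volume
```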

Storage platforms that allow for volume resizing are helpful if the total number of writable volume users is not known at the time of initial App Volumes deployment. Follow VMware best practices when managing thin-provisioned storage environments. Free-space monitoring is essential in large production environments. In a large App Volumes environment, it is not usually a good practice to allow user behavior to dictate storage operations and capacity allocation.

For this reason, VMware recommends that you create writable volumes at the time of entitlement, rather than deferring creation. A storage group is a collection of datastores that are used to serve AppStacks or distribute writable volumes. In App Volumes 2.x, having a common datastore presented to all hosts in all vCenter Servers allows AppStacks to be replicated across vCenter Servers and datastores. When using AppStack storage groups, the App Volumes Manager manages the connection to the relevant AppStack, based on location and number of attachments across all the datastores in the group.
