Upgrades Coming

It is an interesting day here at Unitek. It looks like we will be upgrading our Cisco CCNA classroom quite a bit: shiny new Dell PCs as well as a tremendous upgrade to our CCNA rack. We had been using fairly current 2621 series routers, but it looks like we are upgrading to 2811s, which is pretty darn nice! It is one heck of an upgrade for the Cisco CCNA training. 22 new routers at around $5k each… I'm afraid to do the math, hehe.

Anyway, I can't wait to get into them: all kinds of new labs and topologies. It really is going to be fun. For the Cisco CCENT training I can utilize SDM, a GUI (graphical user interface) that will let students configure most everything necessary with a few mouse clicks! For my CCNA students it won't be that easy, though 😛 It's the CLI (command line interface) for them!
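
To give you a taste of the difference, here is the kind of thing the CCNA students will be typing at the CLI. This is a minimal sketch; the hostname and addressing are made up for illustration:

Router> enable
Router# configure terminal
Router(config)# hostname Lab2811
Lab2811(config)# interface FastEthernet0/0
Lab2811(config-if)# ip address 192.168.1.1 255.255.255.0
Lab2811(config-if)# no shutdown
Lab2811(config-if)# end
Lab2811# copy running-config startup-config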

Unitek’s New Free Microsoft CRM Add-on!

Unitek’s Microsoft CRM Training Team is pleased to announce that we have a new product for CRM 4.0 that we’re giving away! The add-on, Quick Activity, can be downloaded from our site here: Unitek Microsoft CRM Add-On

Quick Activity allows users to add History records with just a couple of clicks. It's a nice alternative to the standard CRM activity form, saving a few clicks per activity and quite a few clicks per day!

Come back to our blog soon and we'll show you another way to use Quick Activity, from the entity grid. That will let you create History records for an item without actually opening the record!

NetApp Data ONTAP – Installing Your Simulator

Once you have downloaded the simulator you are ready to install. First you need a Linux host. Earlier versions of the sim – before 7.0 – required Red Hat 7.1 and would not run on anything else. Since Data ONTAP 7.0, the simulator has run on every version of Linux I have tried, including Red Hat, various Red Hat clones, SUSE Linux, and Debian.

In addition, the system running Linux should have two Ethernet interfaces. They can be real physical interfaces or, if you are running in a VM, virtual interfaces configured as part of the virtual machine.

The reason for this is that Data ONTAP will take control of the interface used for the simulated storage system. At install time it will ask you which interface to use; the default is eth0. Therefore you will need another interface to communicate with the Linux system (usually eth1, if you accept the default).
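
A quick way to confirm the host actually has two interfaces before you begin (assuming the usual ethN naming):

# you should see at least eth0 and eth1 in the list
ifconfig -a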

The install:

Once you have downloaded the simulator tgz file from NetApp and copied it to your Linux box, you are ready to begin. The simulator must run with root privileges, so I usually do the entire install with the root account.

Go to the directory where you loaded the simulator file and run the following commands:

gunzip 7.2-tarfile-v22.tgz
tar xvf 7.2-tarfile-v22.tar

The file you downloaded may have a different name; if so, substitute the name of the file you downloaded.
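
Incidentally, if your tar is GNU tar (as it is on nearly every Linux distribution), you can combine the two steps into one:

tar xvzf 7.2-tarfile-v22.tgz
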
Usually there is a script called setup.sh which will create your simulator. Enter the following command:

./setup.sh

This will walk you through the setup.

The setup script will also give you the option of installing the simulator as a cluster. Later we will make use of this feature, but for now I chose not to install as a cluster.

The script also asks which host interface to use; I took the default, eth0. Generally the default memory size of 128 MB is adequate, but you can assign more if you'd like.

Finally, we are given the option of installing more disks. The simulator has 3 disks already, but they are only 100 MB virtual disks; to do anything useful, we will need more space. Although the disks are virtual, the space they take within your Linux host is real, so you may need to adjust this to fit the storage situation on your Linux host.
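
Before adding a shelf's worth of disks, it is worth checking how much real space the host has free. For example, from the directory where the simulator is installed:

# the Avail column shows how much room the virtual disks can consume
df -h .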

I added 11 drives, filling out my virtual shelf. I also chose option e for drive size, which is a fairly useful size, though you could certainly go larger if you wished. The virtual drives are then created and the script ends.

There is a catch here. The drives created have bad headers. Next time we’ll cover how to repair the drives and your simulator will be ready to do some work.

Send Emails to Deactivated Leads Without Needing to Activate Them First

It often happens that a sales rep wants to send an e-mail to a Lead that has been deactivated, and they either don't want to, or don't have permission to, reactivate the Lead. As you know, once a Lead record is deactivated you cannot log an activity against it, including sending an e-mail.

As a CRM consultant, I recently had a client ask for a way for their sales reps to send e-mails to deactivated Leads. What follows are the steps required to make this happen.

The solution involves adding a button to the toolbar of the Lead record. Once clicked, this button opens an e-mail activity record for the deactivated Lead.

This blog entry assumes that the reader already knows how to customize the toolbar by adding a custom button. If you don't, please let us know and we will provide instructions.
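
For reference, toolbar buttons in CRM 4.0 live in the ISV.Config customization. The fragment below is only a rough sketch of the kind of entry involved: the URL is a placeholder for wherever you host the page, and the attribute names are written from memory, so verify everything against an ISV.Config exported from your own system before importing:

<!-- Hypothetical ISV.Config fragment: adds a Quick Email button to the Lead form toolbar. -->
<Entity name="lead">
  <ToolBar ValidForCreate="0" ValidForUpdate="1">
    <!-- WinMode="1" should open the page as a modal dialog, which is what
         makes window.dialogArguments available to the script below. -->
    <Button Url="http://crmsvr/isv/quickemail.htm" WinMode="1">
      <Titles>
        <Title LCID="1033" Text="Quick Email" />
      </Titles>
      <ToolTips>
        <ToolTip LCID="1033" Text="Send e-mail to a deactivated Lead" />
      </ToolTips>
    </Button>
  </ToolBar>
</Entity>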

Here is the custom code to accomplish the above:

Two points to remember:

  1. Depending on how your browser copies this code, the quotation marks may come across as curly "smart quotes". After you paste, go through and make sure every single and double quote is the plain straight quote typed from your keyboard, or the script will not run.
  2. In http://crmsvr/activities/email/edit.aspx?pId=, you have to replace "crmsvr" with the name of your Microsoft CRM server.

Code

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<title>Quick Email</title>
</head>
<body>

<h1 align="center">Please Wait</h1>

<script type="text/javascript">

// The Lead form passes itself in as a dialog argument when the
// toolbar button opens this page as a modal dialog.
var oOpenLead = window.dialogArguments;
var oLead = oOpenLead.document.crmForm;

// Pull the lead's name and record id off the CRM form.
var FirstName = oLead.firstname.DataValue;
var LastName = oLead.lastname.DataValue;
var GUID = oLead.ObjectId;

// Open a new e-mail activity addressed to the lead
// (pType/partytype 4 = Lead), then close this helper window.
var PA = window.open("http://crmsvr/activities/email/edit.aspx?pId=" + GUID
  + "&pType=4&pName=" + FirstName + "%20" + LastName
  + "&partyid=" + GUID + "&partytype=4&partyname=" + FirstName + "%20" + LastName
  + "&partyaddressused=&contactInfo=",
  "", "menubar=0,status=1");

window.close();

</script>

</body>
</html>

RM
Microsoft CRM Consultant
Unitek Microsoft CRM Services

Cisco Certification – What’s the Big Deal?

What is this whole certification thing about? I mean why would someone want to become Cisco Certified? What reasons could anyone have for wanting to get CCENT or CCNA training, let alone CCNP training, when they could just sit down with a good book and read it for themselves?

Well, let me tell you something: it's not as easy as it sounds to just pick something up and figure it out. How long did it take for man to figure out this thing called fire, or how about the wheel? Trial and error is an incredible way to truly learn what works and what doesn't, but think of how much time and effort is involved in trying every possible permutation… I have only found three ways of learning something, and figuring it out for myself takes the longest by far.

Reading a book – or, to be more specific, several books by separate authors, all focused on Cisco certification – is the next largest time sink. Who has the ability or time to burn through the recommended 3600 pages of material? Realistically, you will have to read those books two, if not three, times to truly get a good handle on the topics.

Of course, that's only the first batch, by author #1. What about someone else's spin on the material? Each author will have a separate take on it, putting a different spin on explaining it. In the end, you will have 1500 pages of reading to tackle for your CCNA, and 10K to square up to for your CCNP. Don't even get me started on my recommended reading list for the CCIE!

OR…

…You could ask someone who has been there. Someone who has beaten their head on a desk for days on end. Someone who has read those pages. Someone who has made it work.

Sometimes it is really quite hard to paint a picture of a technical topic using the written word. But remember what 'ma' always said: "A picture is worth a thousand words." Here is an example.

According to Cisco (http://www.cisco.com/warp/public/556/8.html), these terms are defined as follows:
Inside local address—The IP address assigned to a host on the inside network. This is the address configured as a parameter of the computer OS or received via dynamic address allocation protocols such as DHCP. The address is likely not a legitimate IP address assigned by the Network Information Center (NIC) or service provider.
Inside global address—A legitimate IP address assigned by the NIC or service provider that represents one or more inside local IP addresses to the outside world.
Outside local address—The IP address of an outside host as it appears to the inside network. Not necessarily a legitimate address, it is allocated from an address space routable on the inside.
Outside global address—The IP address assigned to a host on the outside network by the host owner. The address is allocated from a globally routable address or network space.
…Yeah, why not just say, or better yet draw:

[Diagram: inside/outside local and global NAT addresses]

An illustration like this can make all the difference in the world when you are trying to earn your CCENT, CCNA, CCNP, or heck, any Cisco certification. That is the reason you take a class: to SEE and understand, not just READ.
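
For those who also want the commands behind the picture, here is a minimal sketch of a static NAT entry that ties the four terms above to real configuration. All addresses are made up for illustration:

! 10.1.1.5 is the inside local address (what the host thinks it is)
! 203.0.113.10 is the inside global address (what the outside world sees)
interface FastEthernet0/0
 ip address 10.1.1.1 255.255.255.0
 ip nat inside
!
interface Serial0/0
 ip address 203.0.113.1 255.255.255.252
 ip nat outside
!
ip nat inside source static 10.1.1.5 203.0.113.10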

Finally, all the Longhorn pieces are falling into place

You can feel the excitement as all the pieces come together. It all started with the first release of Vista – not the real release, but the first. This was our first NT 6, code-named Longhorn, replacing the older NT 5 line of 5.0, 5.1, and 5.2 – Windows 2000, Windows XP, and Windows Server 2003. Longhorn brought architectural changes to improve security. Vista, in Nov of '06, was where Longhorn (the Longhorn client) got started, with unique security features such as UAC and IE in Protected Mode.

Then it got more exciting with Beta 3 of Longhorn Server (LHS) in Apr '07. The product was basically "feature complete" in its Beta 3, and because of that, Microsoft surprised everyone by allowing recertification on LHS in October '07 – before the product RTMed. LHS received its formal name, Windows Server 2008, in June '07 – until then we didn't know whether it was going to be 2007 or 2008.

Then things went up a notch with the RTM of LHS (Windows Server 2008) in February of this year (2008) – February 4th, a date for the history books. LHS is significant primarily because of the unique Longhorn (NT 6) architectural security features mentioned above, plus, as a bonus: Server Core, Server Manager, RODCs, BitLocker, major new Terminal Services, and more. One of the most exciting features is the newly named Hyper-V (formerly WSv) – the next-generation virtualization from Microsoft. The final release of Hyper-V is promised within 6 months of the Feb RTM. The pleasant surprise is that a beta of Hyper-V shipped with the product and can easily be upgraded to the current RC0 of Hyper-V.

Plus, we now have the "real" Vista – Vista SP1 – the Vista that goes with Server 2008, like the matched pair of XP SP2 and Server 2003 SP1.

It’s always been recommended that you manage the environment (Desktops, Servers, and DCs) from a Desktop, not from the Server consoles. With XP SP2, you just installed the Administration Tools (ADMINPAK.msi) from any Server’s ADMIN$ or downloaded the latest Tools from Microsoft. But then with Vista as the latest Desktop, we had a problem: do not install ADMINPAK on Vista – so now what?

As I mentioned at the start, you can feel the excitement as the pieces come together. With Vista SP1, you can now download and install RSAT – the new Remote Server Administration Tools that replace ADMINPAK. And, [drum roll], the final piece… you can add the Hyper-V console to RSAT on Vista SP1.

Viva La Cisco Revolucion!

We are joining the revolution here in the voice department of Unitek by starting our own blog.

Here we go.

The biggest subject on everyone’s lips? The new version of the Cisco CCVP Training course that we run.

We have now upgraded our courses to the latest CVOICE v6, CIPT1 v6, and CIPT2 v6.

Currently there are two tracks to reach CCVP.

You can attend the CCVP Training classes in CallManager 6 or CallManager 4.

You can then sit one of the two tracks:
CM4 – CVOICE, QOS, TUC, CIPT v4, GWGK
CM6 – CVOICE, QOS, TUC, CIPT1 v6, CIPT2 v6.

Alright, that’s it for now, so stay tuned for more on the revolution!

Introduction from a Microsoft MCSE

Hey Everyone!

I'm going to have the chance to share some really interesting stuff with you in the near future, so I thought it'd be appropriate to take a moment to introduce myself first.

I'm Deepika, and I have 9 years of experience teaching and consulting on Microsoft technologies. I am a Microsoft MCSE on NT 4, 2000, and 2003, as well as an MCSA, MCDBA, and MCTS and MCITP on SQL 2005 and Exchange 2007.

Get ready for some quick Microsoft certification and exam tips, as well as notes about really cool features in various Microsoft technologies.

I'll check back in with you all soon, but for now, a quick tip: www.google.com/microsoft will be your best resource for help on Microsoft stuff.

🙂

Advanced IMA – Compatibility Mode – Part 2

Changing farm membership

The admin at the second, questionable server should go to the command prompt and type "chfarm" – the "Change Farm" utility. Changing farms takes about two minutes, and that is all it costs. As long as we are not on the data store server, we can simply run chfarm and tell the system we are "creating a new farm"; we are then pulled out of the old farm and launched into the new one. Within a couple of minutes, the server reports "Farm membership changed successfully" as IMA restarts itself. If the admin is then able to start the management consoles in the new, empty farm, there is nothing wrong with the server DLLs, and the problem was likely just ODBC connectivity. To finish fixing the problem, the admin spends two more minutes changing farm membership again, this time back into the existing farm. The IMA data store forgets any ODBC connectivity issues it ever had with this server, and the server re-establishes itself as "connected".

The same command used above for ODBC connectivity troubleshooting – chfarm – is also used when we want to permanently change farm membership, for example when migrating a test server from the test farm to the production farm.

When a server changes farms, it abandons the old farm database, and loses along with it any published apps and any other configuration settings that had been made in the old IMA data store. It comes to the new farm as a Citrix terminal server with apps installed but nothing published, taking all the defaults of the new farm.

When moving servers between farms, documentation is very important, because in Citrix there is no command to ask a server where it thinks the data store is. There is, however, a registry key, at "HKLM\Software\Citrix\IMA", called "PSSERVER", and the value of the "PSSERVER" registry key is the DNS name of the data store server. If there is no "PSSERVER" registry key, that server is the IMA data store with an Access DB.
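
You can read that value from the command line with the built-in reg tool; a quick sketch, using the value name described above:

reg query "HKLM\Software\Citrix\IMA" /v PSSERVER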

Splitting Zones

The IMA Zone Data Collector is like the IT manager of its "zone", and the data store is like the CIO – just one for the whole company, even if there are multiple "zones" and so multiple "ZDCs". If there are two or three servers in the farm, the ZDC is just another production server, serving apps to ICA clients, even though it also has the extra duty of maintaining the "IMA Dynamic Store" of load management information.

After about 50 servers in a zone (a fuzzy number based on CPU utilization on the ZDC), Citrix recommends a STAND-ALONE ZDC, as well as a stand-alone Citrix license server. Technically the maximum number of servers allowed in a single zone is 512, and there is an IMA registry key to raise that number if necessary, but if we are using average machines, the real limit will come long before 512 servers in a zone. The real limit comes when the ZDC just can't get all its work done anymore – first it is slow for the users connected to apps on that server, then it starts having trouble "enumerating" the apps on the client screens.

At this point we need a second ZDC for the zone, but we have to work around "IMA law", which demands there be only ONE ZDC per zone. So, just like with an IT department of 50 or more people and a "stand-alone" manager, we can work around the one-manager-per-department stipulation by splitting the department: we break up the 50 people into "help desk" and "server support". Now there are two departments, and so two managers, where there used to be one. We use the same strategy with IMA zones: after about a hundred servers in the zone (or somewhere between 100 and 512), we will want to artificially "split the zone".

Though the default IMA zone configuration is based on IP subnetting, this is only for convenience, and it doesn't have to be the case. So we don't have to subnet anything differently in order to get two zones and two data collectors. We simply let "IMA law" and "IP subnetting" diverge.

We go to the PSC farm properties, to the "Zones" tab, and use the "New Zone" button to create "zoneB", then choose some of the servers in the first zone and click "Move Server" to move them to the new zone. (This is an after-hours task because we have to reboot the IMA servers we move.)

[Figure: Advanced IMA_5]

After moving some servers into the new zone, we ought to set a "Most Preferred" and "Preferred" Zone Data Collector, and document what was configured, for use in client support. At this point we have two data collectors where before there was only one, and nothing has changed as far as subnetting.

[Figure: Advanced IMA_6]

Collapsing Zones

Citrix recommends MINIMIZING the number of zones in the Citrix farm. They say the number of zones “exponentially increases” the amount of WAN traffic. More recently they have said that more than 25 zones in a farm doesn’t work.

If the Presentation servers are spread out in multiple locations, by default there are multiple zones. Usually, in this case, there is room for optimization.

[Figure: Advanced IMA_7]

In the diagram above, the admin goes to each location, adds a new server to the farm over port 2512 in the firewall, and because each location is a different subnet, the admin winds up with a ZDC in every location. Still, most of the servers are centralized at headquarters, and only a few servers are distributed across the WAN, in order to serve some back-end DB app.

The issue here is just what kind of traffic, and how frequently it goes, over port 2512 to the ZDCs around the world.

IMA data collectors communicate over the WAN, over port 2512, and transmit any "changes". Changes could be things like publishing a new app, but a change could also be the fact that a USER LOGGED IN, OR OUT!

[Figure: Advanced IMA_8]

So the problem with the IMA defaults for multiple subnets is that, in the scenario above, if any ONE USER simply logs on or off, there is ZDC communication with every other location in the farm, because all the locations have ZDCs.

It goes back to the analogy of a ZDC being an IT manager. We have a big IT department, with a manager, at headquarters. Then we start a new location and staff it with one IT person. The question is whether or not to make that one IT person a "manager". If they are made a "manager", they can't get any work done, because they are on the phone all day in conference calls with their counterparts at the other locations. So the solution is NOT to make the lone IT person a manager, but simply an employee who reports to a manager at another location.

The same may need to be done to the IMA zone configuration when we have a larger HQ and a bunch of smaller remote "sites" with Citrix servers. By default, these servers are all "managers", or ZDCs. Again, without affecting IP subnetting in any way, we want to "collapse" the zones in this case, so that there is NOT a separate ZDC at each location, and then we DON'T have to dial up every location each time someone logs on or off at HQ.

To collapse the zones, we just go back to the PSC farm properties, "Zones" tab, and "move servers" into an existing zone. Again, this is an after-hours task because we will have to reboot the servers. And one more thing in this case: we have to "delete" the empty zone, according to "IMA law".

Zone Preference and Failover

Presentation Server 3 and PN Agent 8 introduced a new feature that increases the availability a Presentation Server farm can provide. ZPF, or Zone Preference and Failover, is a solution that keeps applications available even when an entire site – an entire IMA zone – is down. It can also be used as a failover plan to a backup data center, with the backup data center being another zone in the same farm.

Before Presentation Server 3, the only way to publish an application properly was to publish it once for all the servers in one zone and assign it to the users appropriate to that zone, then publish the app again, this time on the servers in the other zone, for the users appropriate to that zone. The end result: east coast users accessing a published app that ran in NY, and west coast users accessing a different published app that ran on servers in LA.

The problem here is that if one site goes down, there may be connectivity and ample server resources on the other side of the WAN, but there is no way to seamlessly start utilizing those resources in a failover scenario.

What some people were doing was publishing the application across multiple zones and saying everybody could access it. The advantage was that servers were automatically available even when one whole site was down. The disadvantage was the huge increase in network traffic, as a user had to span the WAN just to connect to any app, checking whether the other side of the WAN had a less busy server.

Presentation Server 3 introduced ZPF, which is a feature configured in the PSC policies and only works with the PN Agent ICA client software.

[Figure: Advanced IMA_9]

Rather than sharing load information across zones, administrators can now create policies, one for each IMA zone, in which a preferred zone is defined and a list of backup zones can be defined, from bkp1 to bkp10. The east coast users group, then, would be assigned a policy in the PSC with a preferred zone of NY and a backup 1 zone of LA. The west coast users group would be assigned a different policy, with the LA zone as the preferred zone and NY as the backup 1 zone.

[Figure: Advanced IMA_10]

While no load information would be shared across zones, in the event that all the Presentation Servers at a particular site are inaccessible, there can be an automatic failover of access to the next geographically convenient set of servers, as defined manually by the Citrix administrator.

The failover plan can be used in a disaster recovery scenario, where there are two data centers, a replicated SAN between them, and some front end SSL/VPN with access to pre-configured policies. If the primary data center were to fail, a strategically placed logon access point could redirect users to a failover zone / data center seamlessly, from the PN Agent client software.

Recovering from a failed data store

One of the objections sometimes raised against implementing Server Based Computing in the enterprise is the argument against putting all the eggs in one basket: if the applications all run centrally, the central server farm is a single point of failure. The Citrix implementation, however, can be put together in such a way that it is reliable, with failover mechanisms to backup data centers in place for catastrophic failure.

The Citrix Presentation Server farm consists of application servers, data collector(s), a license server with a valid license file, a data store, and applications and data. The applications and data are unique in any environment, but the rest is constant and can be supported by best practices.

The application servers are not a single point of failure as long as the N+1 model is used (or the N+25% model in a large implementation), where there is always one more physical server available than is actually required for the user base at maximum load. (If peak load calls for four servers, deploy five.) This way the farm can tolerate losing any one server without noticeable user impact.

The data collector is not a single point of failure either, as a new one will be elected dynamically if the regular one goes down or cannot be contacted. As long as the clients have a way of getting to the other servers, such as multiple IP addresses in Program Neighborhood, the data collector's failure should not impact the functionality of the farm.

The license server is 30-day fault tolerant, and in the Enterprise edition an alert can be set with Resource Manager to send an e-mail within minutes of a license server connection failure. If the license server reconnects at any time within the thirty days, the problem resolves itself. If the server is not going to come back up, then the license file, digitally signed with the case-sensitive hostname of the old license server, is the critical component. The license file, a *.lic file, can be backed up to a thumb drive separately and restored to a new server that has the same name as the old license server and the Citrix license server software installed. The license server can also be supported by Microsoft Clustering services. The license file itself is available on the MyCitrix website, and can be returned and reallocated to a new license server name as well, in case of an emergency.

The data store is the central repository where almost the entire Citrix implementation is invested. The Administrators of the farm, the license server to point to, the whole farm configuration, the published applications, all their properties, the security of who gets access to what, the custom load evaluators, custom policies, configured printers and print drivers, all this is stored in the central repository called the data store. After an implementation has been around for a while, this repository is extremely unique, and unless it is documented completely down to the hidden detail screens, the farm data store needs to be protected.

The data store is either a SQL, Oracle, or DB2 database on a server outside the actual Presentation Server farm, or else an Access or MSDE database on one of the Presentation Servers, called mf20.mdb (which showed up with MetaFrame XP, right after MetaFrame 1.8). If it is on an external server, then leaving it up to the DBAs to back it up isn't good enough; the data store can become corrupt, and without a solid known-good backup of the data store, a series of recent tapes could be suspect or worthless. Since so much is invested in the Citrix data store, a separate copy of the database, from 5 to 20 MB in size, should go on the thumb drive with the license file. That thumb drive, plus a Windows Server CD and a Citrix server CD, plus the apps and the data, are the Citrix deployment (apart from any web configuration files in the Web Interface and Secure Gateway or Access Gateway).

The data store itself becomes the single point of failure in some farms, but like the license server it is 30-day fault tolerant, and alerts can be configured in Resource Manager for Data Store Connection Failure as well. As long as the data store is backed up with a known good copy, the data store server in the Presentation server farm is easily replaced.

Unlike the license file on the thumb drive, which is tied to a particular server name, the data store is not tied to a particular server name: it can reside on any Citrix server in the farm, or another server can be built, inserted into the farm, and then host the data store. SQL Express, SQL, Oracle, and DB2 data stores should be backed up with the utilities that come with them. The Access data store, which by default lives on the first Citrix Presentation Server in the farm, is backed up with the command dsmaint backup <path>, and can then be restored at any time to any Citrix Presentation Server in the farm.
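
For example, backing up the Access data store to a file share might look like this (the destination path is made up for illustration):

dsmaint backup \\backupserver\citrix\dsbackup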

When the data store becomes unavailable, the PSC will not launch and the farm will be unmanageable. Users should still be able to access the remaining application servers, since one of those servers will be elected data collector, and that is all users need in order to connect to already-existing applications. But the farm is unconfigurable until the data store is back online.

To restore the data store to a different server, or just to move it to a more convenient place on the network, the procedure is as follows:

  1. place the mf20.mdb that was backed up in the proper directory: C:\Program Files\Citrix\Independent Management Architecture;
  2. create a file DSN to the new data store;
  3. run dsmaint config /user:<user> /pwd:<password> /dsn:<path to dsn> on the new data store server and restart IMA;
  4. run dsmaint failover <new data store server name> on all the other servers in the farm and restart IMA.

To create a DSN file, go to Control Panel, Administrative Tools, on the Citrix server that holds the new data store, and open "Data Sources (ODBC)". On the tab marked "File DSN", create a new file, with Access 4.0 drivers, in the same directory as the mdb file; it can be named anything, but by convention it should be mf20.dsn. On the final screen, the actual database that the DSN file is supposed to point to must be selected. Under the Select button, highlight the proper database (not the imalhc.mdb but the mf20.mdb) and close the utility.

There should now be a DSN file in the "C:\Program Files\Citrix\Independent Management Architecture" directory of the server that is about to become the new data store server.

When servers first join the farm, they need to know where the data store is supposed to be. They log that server's name in their registry, under HKLM, in the IMA key. When the data store moves, even if it moves directly onto a server itself, the IMA configuration does not automatically discover the new location via some kind of broadcast. The servers keep looking for the data store on the old server, and start their 30-day countdown to refusing connections. After the data store is moved, the new data store server needs to be told that it is the new data store, and then all the other servers in the farm need to be told the new name of the data store server, so they can fail over.

Although the identity of the data collector can be seen with the QFARM tool (and with the QUERYDC tool when the data store is unavailable), the identity of the actual data store, and the value of the registry key that says where a server thinks the data store is, are not readily available through a Citrix command line utility. Therefore administrators should be very careful to document where they put the data store, and to make sure all the servers in the farm are pointing to the correct server.

To tell a server that it is the new data store, there is a simple command line with three switches that ends up looking something like dsmaint config /user:Administrator /pwd:password /dsn:"C:\Program Files\Citrix\Independent Management Architecture\mf20.dsn", though of course this could be prepared ahead of time and made into a script. Once the command line returns the word "Successful", the IMA service can be restarted, and that one server is back to having management capability.

But the rest of the servers in the farm are still on their 30-day countdowns. The command to fail over the other servers in the farm to the new data store server is dsmaint failover <new server name>. After that, the IMA service needs to be restarted (net stop imaservice, then net start imaservice).
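
Put together, the per-server failover step looks something like this, where CTXDS2 is a hypothetical name for the new data store server:

dsmaint failover CTXDS2
net stop imaservice
net start imaservice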

Once IMA restarts successfully on all the servers, the Citrix implementation is back to full manageability.

The Access or SQL Express data store can be easily migrated with the dsmaint utility. The option is dsmaint migrate, and instead of three parameters, as in the dsmaint config command, there are six: source user, source password, and source DSN, plus destination user, destination password, and destination DSN. A DSN file is set up pointing to the new SQL, Oracle, or DB2 database, and the data store is migrated.
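
A sketch of what that might look like when migrating from the local Access database to a new SQL data store. The paths, DSN name, and credentials are all made up, and the switch names are from memory, so confirm them with dsmaint /? on your version:

dsmaint migrate /srcdsn:"C:\Program Files\Citrix\Independent Management Architecture\mf20.dsn" /srcuser:Administrator /srcpwd:password /dstdsn:"C:\mf20sql.dsn" /dstuser:sa /dstpwd:password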

If a server needs to be removed from the farm, the proper way is either to run chfarm or to uninstall the Citrix software. If for some reason a server has left the farm unexpectedly and is gone for good, but still has vestiges hanging around in the PSC, there is a command line utility to check, and if necessary clean, the data store. The command is dscheck, and dscheck /clean actually cleans up the inconsistencies.

Strengths and Weaknesses of Your NetApp Simulator

I'd like to get started with NetApp's simulator. This is a very useful product that simulates a NetApp storage system. I will be using it extensively in this blog, so that even if you don't have a NetApp storage system available you can still follow along.

What it is:

The simulator is a wonderful tool for learning about NetApp storage systems. If you are currently supporting Network Appliance products and don't have a lab or test system at work, it is a great way to test concepts before implementation. To understand the simulator, you should realize that it runs real Data ONTAP code. The Data ONTAP operating system is not simulated; it is real. What is being simulated is the underlying hardware, and this brings some limitations with it. The Network Appliance product that most closely resembles the simulated hardware is a FAS270. Still, having your own "FAS270" to play with is pretty useful.

Limitations:

You will have to scale your test implementations down, but I have still found it to be very useful. Like a FAS270, the simulator only supports two (simulated) Ethernet network ports, and it does not support Fibre Channel ports, so you can't use it to test ideas with Fibre Channel LUNs or run Fibre Channel diagnostic commands. It does support iSCSI LUNs, though. The number and size of simulated disks is limited, but is large enough to be useful.

Performance will also not be indicative of real hardware.  We can test proof of concept scenarios, but not scalability or sizing.

Strengths:

The simulator supports clustering, so it is possible to build two simulators in a cluster configuration and test various failover scenarios. Virtually all of the products NetApp has available for licensing are available with the simulator, so if you are interested in testing anything from LUN clones to SnapLock, it can be done with the simulator. Simulator-specific licenses are available for free.

The simulator is available for free download from your NOW account. You will either need to get a NOW account or talk to someone who has one to download the simulator.

After logging in to NOW, go to the service and support page and look for the Toolchest. Select the Toolchest and scroll down the page; items are in alphabetical order. Look for "Simulate ONTAP". It is currently item number 64, but that will probably change over time.

Environment: 

The simulator requires a Linux host. Virtually any version of Linux will work with version 7+ of the simulator; I have used Red Hat, SUSE, and Ubuntu without problems. It is best to have two Ethernet ports on the Linux machine. It also works fine in a VMware virtual machine, again configured with two virtual NICs.

Next:  setting up your simulator