GlobalNames Zone in the New DNS

I really liked the new GlobalNames Zone feature of the DNS Server role in Windows Server 2008 and thought it was particularly neat.

To help customers migrate to DNS for all name resolution, the DNS Server role in Windows Server 2008 supports a special GlobalNames Zone (also known as GNZ) feature. Some customers in particular require the ability to have static, global records with single-label names, which WINS currently provides. These single-label names typically refer to records for important, well-known and widely used servers in the company, servers that are already assigned static IP addresses and are currently managed by IT administrators using WINS. GNZ is designed to enable the resolution of these single-label, static, global server names using DNS.

GNZ is intended to aid the retirement of WINS; it is worth noting that it is not a replacement for WINS. After the GlobalNames zone has been created and enabled, administrators must manually create, add, edit and, if required, delete name records in that zone. GNZ does not support dynamic updates.

So let's start taking advantage of this feature.
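To give a feel for the steps involved, here is a minimal sketch using the dnscmd utility (the server, alias, and host names below are placeholders; adjust them for your own environment):

  rem Enable GlobalNames zone support on the DNS server
  dnscmd dc1.example.com /config /enableglobalnamessupport 1

  rem Create the GlobalNames zone as an AD-integrated primary zone, replicated forest-wide
  dnscmd dc1.example.com /zoneadd GlobalNames /dsprimary /dp /forest

  rem Add a single-label name as a CNAME record pointing to the server's FQDN
  dnscmd dc1.example.com /recordadd GlobalNames intranet CNAME webserver01.example.com

After this, DNS clients in the forest should be able to resolve the single-label name "intranet" without a WINS lookup.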

Space Management and Data ONTAP – Part One

The subject of space management always seems to generate some interesting discussions, particularly in my Data ONTAP Fundamentals class.  I think part of the reason for this is the terminology surrounding some of these concepts.

The first concept I want to clarify is space guarantees as they apply to flexible volumes. A flexible volume is actually a WAFL file system inside an aggregate. It is possible to have one or more of these volumes/filesystems inside a single aggregate. This seems simple enough, but, as with many things WAFL related, it is not as simple beneath the surface. WAFL just behaves differently compared with other filesystems that I am familiar with. For example, one of the things I noticed when I first began working with Data ONTAP is that WAFL filesystems were never formatted.

Normally, when a filesystem is created, a particular set of blocks is assigned to that filesystem and then organized by writing certain data structures onto those blocks. A WAFL filesystem behaves more like the concept of sparse files in UNIX. When a sparse file is created, the metadata is updated like a normal file, but blocks containing empty space are not written. The actual size of the file and the apparent size of the file will be different. The actual size of the file will change as data is written into the file, but the apparent size of the file, as reported by the file system, will not change.
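As a quick side illustration (on a Linux host, not on the filer), you can see the apparent-versus-actual size difference by creating a sparse file yourself:

  # Create a file with an apparent size of 1 GB but no data blocks allocated
  truncate -s 1G sparse.img

  # ls reports the apparent size (1 GB); du reports the blocks actually allocated (~0)
  ls -lh sparse.img
  du -h sparse.img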

WAFL volumes are like this. When the volume is created, metadata within the aggregate is updated to reflect the presence of the volume, but specific blocks are not actually assigned to the volume until we need them to store something. When a volume is created we have several options that control when the space for that volume will be set aside, but specific blocks will not be assigned until they are needed.

If the space guarantee is set to volume – the default behavior – then a certain number of blocks from the total in the aggregate will be set aside for use by that volume. This "guarantees" that I will have enough space within the aggregate to supply that volume. Here is what that looks like in FilerView:

[FilerView screenshot: creating a flexible volume with the space guarantee set to volume]

We can check the settings on a volume by using the vol options command:

[Console screenshot: output of the vol options command for vol3]

Notice here that guarantee=volume is set for vol3.
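If you prefer the command line to FilerView, the same information is available there. A quick sketch in Data ONTAP 7-Mode syntax (the volume, aggregate, and size are just examples):

  filer> vol create vol3 aggr0 100g   # the space guarantee defaults to volume
  filer> vol options vol3             # the option list should include guarantee=volume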

If the space guarantee is set to none when the volume is created, metadata within the aggregate is updated to reflect the presence of the volume, but the number of blocks that would correspond to the size of the volume is not subtracted from the total number of blocks in the aggregate. In this case, it is possible to create a volume bigger than the aggregate. This is called "thin provisioning." If this volume were a Windows share, then a Windows user who checked the properties of the volume would see the volume size that the storage administrator used to create the volume, but he would not actually be able to write more data into the volume than could be accommodated by the aggregate.
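As a rough sketch of what that looks like (again 7-Mode syntax, with made-up names and sizes), you could create a 2 TB thin-provisioned volume inside an aggregate that only holds 1 TB:

  filer> vol create bigvol -s none aggr1 2t   # guarantee=none, so no blocks are set aside
  filer> df -h bigvol                         # users see a 2 TB volume, regardless of what aggr1 can actually hold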

It is also possible to have more than one volume in an aggregate without space guarantees. In this situation, blocks will be assigned to the volumes as they are needed, on a first come, first served basis. Anyone looking at the volumes from the user's perspective would see the volume sizes the storage admin created, even though there are not enough blocks to supply the space reported.

In either case, it is up to the storage administrator to stay ahead of the situation and grow the aggregate before the blocks are actually needed.
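In practice that usually means watching aggregate usage and adding disks before space runs out; something along these lines (7-Mode commands, with made-up names and disk counts):

  filer> df -A aggr1        # check how much of the aggregate is actually consumed
  filer> aggr add aggr1 4   # add four spare disks to the aggregate before it fills up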

Improved Exam Security Components

It's about time! Cisco has finally improved the security components of taking an exam. When you sit for your CCNA or CCNP exam you will be required to show two forms of identification. A driver's license and a credit card will suffice, as long as your signatures match. Also, when signing up for your CCNA or CCNP exam, you will now be photographed and a digital signature will be captured. These measures are intended to help reduce the number of proxy test takers. A proxy test taker is, for example, someone who takes your CCNA exam for you. Cisco has also implemented additional test security measures, namely exam forensics. What this means is that while you are taking your Cisco exam, they have metrics on how long a question should take; if you start blowing through your exam faster than what is considered normal, you will set off all sorts of alarms. Cisco then reserves the right to hold your exam and review the manner in which you answered your questions. If they feel you had prior knowledge of the exam answers, you will be asked to resit the exam at Cisco's expense. If you are unable to pass or do not agree to this retest, Cisco will not recognize your achievement. I feel this is a step in the right direction. Too many people were getting their CCNA & CCNP certifications by cheating.

Flash Drives and NetApp

NetApp recently announced their direction on flash drives. The most interesting aspect was not just that they intend to support them, but that support will be implemented not only as plug-in disk drives, but also as an extension of the WAFL cache.

One of the problems with spinning disk drives as we know them is the capacity/performance tradeoff. Drive capacity has been increasing at a faster rate than drive performance.

In many, if not most, workloads there is a relationship between capacity and demand. It may or may not be linear, but it is there. The more capacity I have, the more requests there will be to access that data. If we put larger and larger amounts of data behind fewer spindles, then we are bound to have performance problems. We will be constrained by the performance of the spindles. This problem will be greatest for heavy random read workloads. WAFL is very efficient at turning random writes into sequential writes, so the problem is more likely to occur with reads.

The traditional way to attack this problem is to increase the number of disk drives. This means buying more capacity than we need in order to support the number of IOPS we require. In addition to the up-front cost of the drives, we are going to be spending more for power and cooling to keep those drives spinning.
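As a rough back-of-the-envelope illustration (assuming something on the order of 180 random IOPS per 15K RPM spindle): a workload that needs 9,000 random read IOPS requires about 50 drives regardless of how small the data set is, so with 450 GB drives you would be buying roughly 22 TB of raw capacity just to get the spindle count.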

Recently, NetApp announced Performance Acceleration Modules (PAM). These are 16 GB DRAM-based intelligent read cache boards that plug into PCIe slots in the storage controller. By increasing the memory available to WAFL for caching read data, PAM cards can substantially reduce the latency of read requests. Since these requests are fulfilled without ever reaching the storage stack, they are very fast – faster even than the request could be returned from a flash drive. NetApp is currently working on second-generation PAM cards that will use higher-density flash technology to increase the capacity of these cards.

The issues with flash drives revolve primarily around price/performance. Flash drives can provide 20-30 times the random-read performance of spinning disks at 10 times the cost per megabyte. The current high cost rules out many applications, but they are suitable for high-ROI applications. Where they are cost effective, flash drives may be an excellent solution.

In less demanding environments, caching solutions may be more cost effective. Caches dynamically adjust to changing workloads, holding the hottest data at any given point in time. This may provide the optimal solution when resources are scarce.

Maximizing Your Sales with Microsoft Dynamics CRM 4.0

I don't know whether anyone still reads books anymore, but I thought I would recommend one.

Maximizing Your Sales with Microsoft Dynamics CRM 4.0

The book explains, in concise, easy-to-understand language, how to get the most out of this helpful CRM software. Topics like working with contacts and accounts, managing opportunities and schedules, writing letters, sending emails, running reports, and more are explored in depth. With this book and Microsoft Dynamics CRM 4.0, you'll also be able to perform numerous other functions that will speed up your workflow and keep customers happy.

You’ll learn to:

  • Create price lists and discount codes to manage pricing
  • Generate quotes, orders and invoices for sales
  • Track competitor information
  • Automate correspondence using Outlook and Word
  • Create marketing lists of leads, accounts and contacts
  • Track the effectiveness of your marketing efforts
  • Much more!

I would also like to mention that Edward Kachinske attended our 6-day Microsoft CRM boot camp; this is what he had to say about us: "DG, the CRM Practice Manager from Unitek, was a great resource while writing this book. He’s one of the premier Microsoft CRM trainers, and if you ever get the opportunity to take a class from him do so."

To purchase this book, visit amazon.com.

For more information on an upcoming Microsoft CRM Boot Camp please click here.

Cisco and Voting

Well the time has come at last.  It is Nov 4th.  Election time.

Don’t forget, the most important thing is to get out and vote.

Cisco or Microsoft, it doesn’t matter, as long as you vote.

Last week we had a guy in one of our classes called Joe.  He wasn’t a plumber, and he wasn’t THE Joe the plumber.

He was a CCNP and an MCSE.  He was obviously choosing both sides!

This is a great certification combo in the current market, as it covers the desktop and infrastructure requirements.

He was next deciding whether to go for the CCVP or the CCSP bootcamp.

With the introduction of the new CCNA certifications, becoming Cisco certified has been made easier.

You can aim for CCNA-R&S, then go CCNA-Sec/CCNA-Voice/CCNA-Wireless.  Then aim for the next level exams.

CCNA-R&S leads to CCNP
CCNA-Sec leads to CCSP
CCNA-Voice leads to CCVP
CCNA-Wireless currently has no ‘NP’ level equivalent, but rumours abound that Cisco will be launching a ‘CCWP’ program within the next 2 years.

Good Luck, and VOTE!

New Features of Server 2008: Server Manager

Hey Guys!  So continuing with our research on Windows Server 2008, I wanted to highlight just a few more features.
In my experience, Server Manager in Windows Server 2008 provides a single source for managing a server’s identity and system information.

Server Manager makes server administration more efficient by allowing administrators to do the following by using a single tool:
–    View and make changes to server roles and features installed on the server.
–    Perform management tasks associated with the operational life cycle of the server, such as starting or stopping services, and managing local user accounts.
–    Perform management tasks associated with the operational life cycle of roles installed on the server.
–    Determine server status, identify critical events, and analyze and troubleshoot configuration issues or failures.
–    Install or remove roles, role services, and features by using a Windows command line.

We should also get used to the new terms roles and features.  Roles add major functionality to the server, such as DHCP or DNS, whereas features augment the functionality of installed roles, such as Failover Clustering and Group Policy Management.
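For the command-line side of this, Windows Server 2008 includes ServerManagerCmd.exe. A minimal sketch (the role and feature identifiers below are examples; run the query first to confirm the exact names on your server):

  rem List all roles and features and show which are currently installed
  servermanagercmd -query

  rem Install the DNS Server role
  servermanagercmd -install DNS

  rem Install the Failover Clustering feature
  servermanagercmd -install Failover-Clustering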