Tag Archive | "Best Practices"

4 Personas of the Next-Generation CIO – IT News

Tags: Best Practices, Business Alignment, Business Service Management, CIO, Transformation, Trends


The Hub Commentary

The role of the C”I”O continues to evolve.  And the “I”s have it!  Infrastructure, integration, intelligence and innovation will need to be the focus of  next-generation CIOs.  As the article states, this is a year of innovation and the re-alignment of IT resources.  The shift is from keeping the lights on to growing the revenue with new products and services.

This impacts IT operations in that they have to be ready to support and analyze the mission-critical and to automate the routine and mundane.  What I find most astounding in the article is that, as usual, innovation comes in 4th.

Only 5-10% of the budget is allocated to innovation and growth, while more than 70% goes first to infrastructure and just keeping the lights on.  This is the shift that needs to occur: by optimizing and leveraging technology for integration and automation, infrastructure spend comes down and the spend on innovation and growth rises to 30% or better.

Innovation and growth have to come to the forefront this year and stop being the afterthought, the nice-to-have once everything else is done.

Randy

While next-gen CIOs will emerge from traditional technology backgrounds as well as business-leader backgrounds with technology expertise, the report says, current CIOs will need to master four emerging personas in order to compete in the new environment.  Read more

Business Service Management Implementation Best Practices – or NOT!

Tags: Best Practices, Business Service Management


Here is a humorous take on a top 10 of implementation best practices. We’ve all been there, and we hope you see both the humor and the best practices as a result:

  1. Encourage nearby people in the field of “Sales” to set project deadlines for the implementation. We find their ability to process complex project dependencies to be at its highest on the golf course, perhaps even the 19th hole.
  2. Ensure that project documentation is either non-existent or so abundant that no one could possibly read it all. Either one of these scenarios allows project team members to think freely and creatively. Having an adequate level of documentation causes creativity and brilliance to be stymied. Inevitably, each individual’s efforts will come together in a magical fashion. Count on it!
  3. When looking at the System Development Life Cycle steps (Analysis, Design, Construction, Testing, Deployment…but everyone knows that), skip Analysis and Design. They cost money, and paperwork is boring. They also cause a natural peer-review culture to develop, and we all know that we should never critique each other’s work.
  4. Do not start using the solution until every single possible risk, concern and issue has been mitigated to its worst case scenario end. Just because a significant earthquake hasn’t hit the Mid-Eastern United States in the past 50 years doesn’t mean it’s not going to happen next week. Spend A LOT of time and money worrying about this and planning for it.
  5. Do not solicit feedback from your eventual end users or “customers”. It’s usually hard for them to speak in the exact same semantics/terminology that technical people are familiar with so it’s often more expedient to just imagine and assume their requirements.
  6. Ensure that the project has zero intermediate checkpoints and zero deliverables. Checkpoints and deliverables seem to have the same negative impact as documentation – see #2.
  7. Methodology Smethodology!
  8. When customers request something new, especially when it is an unusual, one-off request, always, ALWAYS say YES!
  9. When visualizing how you will manage the scope of your implementation, imagine a youth league soccer match. Many of the players are focused on the ball, which keeps squirting away from the pack but they keep after it and all try to kick it at the same time; Sullivan is picking weeds; Aidan is running away from Sally since she wants to hold his hand. But everybody keeps playing… even when the ball goes out of bounds or when the time expires. Everybody has fun and everybody wins!

Ok, I concluded before I reached 10.  Failing to line up with the objectives of the business is the fatal flaw of IT, and we are at a tipping point to change that this year, with the service provider explosion and cloud computing as catalysts.  Keep one eye open as you look over your shoulder at those who know how to measure business services.

Kevin

 

Service Level Agreements: Why are they so hard to track? Just do the math!

Tags: Availability, Best Practices, BSM, Business Service Management, Service Level, Service Value


I have worked with many customers to track service level agreements in their BSM implementation. I can honestly say that there is only one thing that all of the projects had in common: they were extremely difficult.

Now, I was usually called in midway through the implementation, when the decisions had already been made and the schedule was looking impossible. Or even worse, I would become involved after the implementation had been put into production and the mistakes had already been made.

So why are SLAs so challenging to track and manage?

  • Have you seen the contracts? In general, I don’t like contracts. I’m not a lawyer, and let’s face it, they can be difficult to decipher. With SLAs, the first thing that needs to be done is take the contract and figure out what exactly was promised. Then determine what underlying data should be used for the calculations. Then figure out how to get that data from the IT devices and put it all together for the service. These steps are crucial to success, and must all be done before implementing the SLA solution.
  • It’s just (total time – downtime)/total time… Saying that a service needs to be available 99% of the time during peak hours is easy. Determining the actual availability key metric is more challenging. You need to determine what exactly constitutes an outage, set up calendars for peak hours, and decide which outages shouldn’t count (should 1 second of downtime count?). The math for simple availability isn’t difficult, but accounting for all of the necessary factors…well, that is more complex (see the sketch after this list).
  • So many numbers…so little time. Since computers have existed, engineers have worked tirelessly to optimize performance. There are limitations to what software can do. One must think about the amount of data to be stored and calculated. For instance, if the data for availability is being stored every minute, and the report shows the last two years of availability metrics, oh, and also real-time metrics, this report is going to take some time to calculate and display the results.
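To make that math concrete, here is a minimal sketch in Python of the availability calculation described above, assuming you already have outage records and a peak-hours calendar. The function name, data shapes and the one-minute threshold are hypothetical illustrations, not any particular BSM product's API.

    from datetime import timedelta

    def availability(outages, peak_windows, min_outage=timedelta(seconds=60)):
        """Return availability as a fraction of contracted peak time.

        outages      -- list of (start, end) datetimes for detected outages
        peak_windows -- list of (start, end) datetimes for contracted peak hours
        min_outage   -- outages shorter than this are ignored (the "1 second" question)
        """
        total = sum(((end - start) for start, end in peak_windows), timedelta())
        downtime = timedelta()
        for o_start, o_end in outages:
            if o_end - o_start < min_outage:
                continue  # too short to count, per the contract
            for p_start, p_end in peak_windows:
                # only the part of the outage that overlaps peak hours counts
                overlap = min(o_end, p_end) - max(o_start, p_start)
                if overlap > timedelta():
                    downtime += overlap
        return (total - downtime) / total  # plain (total time - downtime) / total time

The same skeleton extends naturally to contracted maintenance windows: subtract them from peak_windows before calling the function, and they simply stop counting as either uptime or downtime.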

These are the main three challenges I see when working with SLA implementations. Now how do we solve these?

  1. Know the data before starting. This sounds like a simple task, and most people think they have a good understanding of all of the underlying devices, metrics and relationships that go into defining the service and the key metric for their SLA. No one would want to start implementing an SLA project without knowing all of the ins and outs. Or would they? People often start modeling their services and tying services to SLAs before all of the underlying infrastructure is in place. A thorough understanding of where this data will come from (monitoring software, trouble ticket systems, back-end databases) is critical because the calculation can change depending on the type of data.
  2. Determine what details can alter the key metric. As I mentioned earlier, calculating availability is not difficult. However, determining the total time and downtime can be. Take into account the time periods that count as maintenance. Is there a weekly maintenance window? What counts as uptime? Also, what sort of data can be ignored? Are there certain outages that do not affect the service’s availability? Don’t be too generic…try to figure out all of the details that contribute to the SLA’s key metric.
  3. Be realistic when creating reports. The dashboards or reports are what we really care about. We need a way to show how the SLAs are tracking, and a clean way to get a quick visual on what has failed or is on its way to failing. Putting 1,000 services on a single page is probably not the way to go. Let’s also not reinvent the wheel: if your organization has been calculating SLA metrics for years in an external program, use that data. Why spend the extra time setting up the lower-level data to feed into a program that is going to do the same calculation? (A pre-aggregation sketch follows this list.)
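On the “so many numbers…so little time” point, a common pattern is to pre-aggregate the raw samples so a two-year report never has to churn through per-minute data. A hypothetical sketch, assuming each sample is simply a timestamp plus an up/down flag:

    from collections import defaultdict

    def daily_rollup(samples):
        """Collapse per-minute availability samples into daily summaries.

        samples: iterable of (timestamp: datetime, up: bool).
        Returns {date: availability_fraction} -- roughly 730 rows for two years
        instead of about a million raw samples.
        """
        buckets = defaultdict(lambda: [0, 0])   # date -> [up_minutes, total_minutes]
        for ts, up in samples:
            day = buckets[ts.date()]
            day[0] += 1 if up else 0
            day[1] += 1
        return {d: up / total for d, (up, total) in buckets.items()}

The report then reads the rollups, and only drill-downs touch the raw data.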

Tracking and managing Service Level Agreements will continue to take time and effort. It requires buy-in from many different departments and resources, but BSM should and can simplify an SLA implementation.

Snow storm knocked out power

Tags: Best Practices, Business Service Management, IT Management, Service Providers


We had a snow storm the other day. During the night we lost power, and by morning the house was cold.   I was able to get on the internet with my laptop, which has a cellular wireless connection, as well as with my cell phone.   I checked some emails for work purposes and then went to the electric company’s website to see if there were any details on the power outage in my area.   I found a link on the home page which led me to an up-to-date map of outages with details on each one.   From this map I was able to find my location, click on the red circle and get an estimate of when the service (electricity) would be restored.

It is nice to see a service provider that leverages technology. Because they have an up-to-date website with details on the services they are providing to me, I will not be calling in to speak to a representative for an update.  Granted, for this type of situation the dial-in method would probably offer some very long, generic recording, and I’m just not sure I’d be interested in listening to the whole thing before hitting buttons to get directed to a human being.

The moral of the story: having dashboards for customers to log into to check on the services a service provider delivers was at one time a differentiator; it is now an assumed feature of the service being provided.  The same is true for the IT department.  I want to log into a dashboard to check on my active helpdesk tickets, planned maintenance and active outages (with updates on when they will be resolved).

Business Service Management is the methodology for gathering all of the details and populating the dashboard for the customers.

End user of Business Service Management

Tags: Availability, Best Practices, Business Service Management, Integration, IT Management Tools


As an end user within an organization, I require a dashboard that I can log into from time to time to see the Services that are offered to me and the health of the services that I am currently using.   There are pieces of this that fall into the Service Catalog arena, but in the end, these services need to be managed.

The Service Management console needs to be able to slice and dice the infrastructure into the components of the individual services being provided to the end customers.  It should provide a view based on the role the person plays within the organization.  As an end user, I should see the services I can sign up for and the services I am already signed up for.  As a manager, I should be able to see the services that my team is using and the availability of those services.

End users do not know, and should not be required to know, the servers, routers, NAS, etc. supporting a particular service.  To them, it is EMail, CRM, Timesheet and a slew of other Service offerings.  The IT group needs to manage the services in the same way.   When users open tickets, it’s on the service, not on the technologies supporting the service.  Business Service Management puts the focus of management on the Service and on the technologies supporting it.
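As a toy illustration of that role-based split, here is a sketch with made-up services, users and numbers; it is not any particular product's data model, just the shape of the two views described above.

    services = {
        "EMail":     {"subscribers": {"alice", "bob"}, "availability": 99.9},
        "CRM":       {"subscribers": {"bob"},          "availability": 98.7},
        "Timesheet": {"subscribers": {"alice"},        "availability": 99.5},
    }

    def end_user_view(user):
        """An end user sees what they are signed up for and what they could sign up for."""
        return {
            "subscribed": [s for s, d in services.items() if user in d["subscribers"]],
            "available":  [s for s, d in services.items() if user not in d["subscribers"]],
        }

    def manager_view(team):
        """A manager sees the services the team uses and the availability of those services."""
        return {s: d["availability"] for s, d in services.items()
                if d["subscribers"] & set(team)}

    print(end_user_view("alice"))
    print(manager_view({"alice", "bob"}))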

Tobin

Cross Management System Integration

Tags: Best Practices, BSM, Business Service Management, Integration, IT Management, IT Management Tools, NetworkWorld


Understanding the overall health of a service requires several management tools: network monitoring (device up/down, switches, etc), device monitoring (CPU, memory, disk), application monitoring (logs, SNMP, etc), and so on.  The common approach is to use more than one tool, such as EMC SMARTS, IBM Netcool, BMC Patrol or home-grown monitoring tools.   Since there are now multiple tools and multiple consoles with health and availability information, the next common approach is to set up some type of cross-product integration, like forwarding SNMP events out of SMARTS and Patrol into Netcool.   This is a great approach for integrating management tools into a single console, but the problem is that not everything fits (e.g., Network Maps in SMARTS, Knowledge Modules from Patrol).

SMARTS does a great job discovering the network and provides more than up/down events, but most of that information is not sharable with Netcool via an SNMP forwarding mechanism.  Patrol has the same challenge: not everything is going to fit inside Netcool.  This is not a negative against Netcool; it is the common challenge with all event managers.  Netcool provides a lot of value around the things it monitors directly, as well as event correlation, de-duplication, etc.

Forwarding events from one management system into another tool provides value, but it should not stop there.   A single unified console with a Service Catalog type of view into the infrastructure, fed directly and indirectly by the management tools, is the approach for end-to-end management.   In the end it places all of the important details at the operators’ fingertips, reduces the complexity, the knowledge required and the number of tools, and in turn should reduce downtime.

Forwarding events from one system to another is not integration.  Bi-directional interaction between the tools (i.e., being able to receive alarms/events and perform actions such as Ack and Close on them), as well as the ability to leverage more than just the alarms/events, is important.  Don’t settle for event forwarding; leverage the investment you made in the other tools.
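As a rough sketch of what “bi-directional” means in practice, here is a generic adapter that reads alarms out of a source tool and writes Ack/Close actions back into it, rather than only forwarding traps one way. The REST endpoints, fields and token scheme are hypothetical placeholders, not the actual SMARTS, Patrol or Netcool interfaces.

    import requests

    class EventAdapter:
        """Generic two-way bridge to an event source (endpoints are illustrative)."""

        def __init__(self, base_url, token):
            self.base_url = base_url
            self.headers = {"Authorization": f"Bearer {token}"}

        def fetch_alarms(self):
            # read alarms/events out of the source tool
            resp = requests.get(f"{self.base_url}/alarms", headers=self.headers)
            resp.raise_for_status()
            return resp.json()

        def acknowledge(self, alarm_id):
            # push the operator's Ack back into the source tool
            requests.post(f"{self.base_url}/alarms/{alarm_id}/ack", headers=self.headers)

        def close(self, alarm_id):
            # and likewise the Close, so both consoles stay in sync
            requests.post(f"{self.base_url}/alarms/{alarm_id}/close", headers=self.headers)

The point is the shape of the interaction: actions taken in the unified console flow back to the tool that owns the alarm, instead of leaving two consoles that disagree.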

Tobin

ITIL will be the end of ITIL

Tags: Best Practices, Business Service Management, CIO, Cloud, IT Management, ITIL, Performance, Service Value


Service Value is the Next Generation!

Time for ITIL.  Year-end is always a good time to watch the predictions fly, and I too had to add 4 to my friend Richard Whitehead’s post with his top 10.  In my post I made a prediction that this would be the year that ITIL begins to wane as a topic of conversation, for a couple of reasons.  Last week reinforced one of those beliefs with a CIO article I commented on, “ITIL versus the Cloud:  Pick One”.  Even this week, I still cannot wrap my head around this article, and this is where my reasons for ITIL waning are deeply rooted.

First, I will mention that I have been ITIL Foundation certified twice (version 2 and version 3) and was around in the service desk world during the days of all the little books for version 1.  Yes, I started working when I was 12.  Seriously, ITIL is a prescription, not a remedy.  It is guidance about processes, not a mandated set of principles by which to govern an IT organization to the point of suffocation.  I have done my fair share of consulting, and once folks get past the hurdle of accepting that it is merely advice and adapt it to what makes sense in their environment without suffocating, the sessions are more successful.  However, I find that ITIL is one of those topics I would rather not discuss in mixed company, much like politics or religion, as it brings out very passionate debates and you need to know the party lines to consult.

Why do I believe it is really waning? It’s an inward-facing operational process that is simply expected, and it has become a consulting and training industry.  Business doesn’t want to hear about how you do your job, just that you do it, do it well and do it efficiently.  It’s like this: as consumers we take our cars to the shop for routine maintenance and for fixes when things break, and what we are most interested in is how much it will cost and when it will be done.  We don’t really care what processes and procedures the mechanic follows or how many certifications he/she has to perform the job; we just care that he/she can do the job, do it well and do it cost-efficiently.  So why is it that as IT professionals we go to work every day and speak gibberish to our business colleagues?

I would find it a safe bet that your business would love to hear you come in and tell them exactly how implementing an updated service, adopting a new technology or automating a process will reduce XYZ cost, drive XYZ additional revenue, and how you will monitor Service Performance to mitigate the risk of service-impacting events.  Just as this article implies, ITIL is still very silo-focused.  Sure, version 3 speaks of a service lifecycle, but not so much of the service’s performance and value to the business.

ITIL is going through a process maturity of its own.  Here are the stages:

  • version 1 – Processes Identified as individual processes
  • version 2 – Processes Integrated
  • version 3 – Process Lifecycle
  • version 4 – still to come and already too late

I’ll end with this: I’m not against ITIL if it is taken as prescription, training and examples.  However, when it becomes the way we speak to our business and customers, and our sole focus, I find it like new technology for the sake of technology.  When I read articles about How to Justify an ITIL Project to the Business, I know the service providers are knocking on your door to take over your data center, because they speak in terms of creating efficiencies, saving money and driving revenue.

We have lost all focus when we start speaking in terms of a new technology not fitting ITIL and having to pick one or the other.  In that situation, I’d leverage the new technology and work to service-enable it: manage it, control it and communicate the value it is delivering to the organization, over ITIL, which is, again, merely training, prescription and examples.

The only things I can guarantee are change and technology evolution.  As technology professionals, we must be seeking the automation, technology and methods by which we can start communicating to our business in their terms.  Communicating Service Performance and Growth and driving Value into our businesses is the focus of those that lead their industries with technology.

I believe the explosion of cloud computing and the service provider market will be the catalyst described by Nicholas Carr’s article, “IT Doesn’t Matter”.

I know this is controversial – give me your thoughts!

Michele


Insights from “Operationalizing Cloud” Research – EMA Blogs

Tags: Best Practices, Business Service Management, Cloud, CMDB, EMA, Trends


The Hub Commentary

Great post by my friend Dennis.

Michele

___________________

EMA has just collected some new data regarding how IT organizations are seeking to assimilate cloud services from a top-down, service management perspective.   (Read Full Article…)

Top 10 reasons a CMDB implementation fails

Tags: Best Practices, Business Service Management, Change, CMDB, CMS, Configuration, ITIL, ITSM, Trends


Below are some of the common reasons that CMDB implementations fail.   They are in no particular order.

Lack of Management Buy-in

Face it: one group is going to be the buyer and installer of the CMDB, but many other groups/departments will be needed to help maintain the data as well as use the data.  If there is no edict to leverage ITIL processes, there is a good chance that the CMDB project will fail or, more accurately… not get used.

Owners of CIs do not have easy access

I’ve seen several times that the change management team/group is the buyer/installer/owner of the CMDB.   There is nothing wrong with that; the problem comes in when they do not have buy-in from the CI owners to help maintain (or validate) the CIs, or the CMDB solution is cumbersome and implemented in a manner that makes it hard for the CIs to be maintained.  The Change Management team doesn’t want to own the CIs (and can’t/shouldn’t), but the owners are not able to easily access the CMDB.

Garbage in, garbage out (and/or stale data)

There are lots of sources of data to populate and maintain the CMDB; exporting XML from one system and importing it into another is only part of the process of ensuring data accuracy, and XML exports are not the only way to integrate with other sources.   Make sure the vendor has ways to filter out noise (who cares about an SSH session from an admin workstation to the server – it’s not a dependency).  If there is too much data, it may be hard to find anything; if there is inaccurate data, no one will trust the CMDB.  Find the middle ground.
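For example, a noise filter on discovered connections might look something like this sketch before anything is loaded into the CMDB. The field names, the admin subnet and the thresholds are purely illustrative assumptions, not a vendor feature.

    ADMIN_SUBNET = "10.1.99."        # assumed admin workstation subnet
    NOISE_PORTS = {22}               # e.g. ad-hoc SSH sessions

    def is_dependency(conn):
        """conn: dict with 'src_ip', 'dst_port' and 'duration_sec' keys."""
        if conn["dst_port"] in NOISE_PORTS and conn["src_ip"].startswith(ADMIN_SUBNET):
            return False             # an admin poking at a server, not a dependency
        if conn["duration_sec"] < 5:
            return False             # too short-lived to be a real integration
        return True

    def clean(discovered):
        # keep only the connections worth recording as relationships
        return [c for c in discovered if is_dependency(c)]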

Lack of third party Integration

There are many reasons to connect to other products to pull in additional details.  You can think of some of these applications as mini silo CMDBs.  The HelpDesk system knows anything and everything about customers; the asset system knows tons-o-things about servers.  Integrating with different sources is a great way to get started, as well as to keep the CMDB maintained over time.

100% or NOTHING

Do not fall into the trap of holding back the release of the CMDB to the company until it is completely done.   I understand that there needs to be a certain level of data within the system before there is value, that there need to be processes in place to maintain the data, and then there are the accuracy challenges.  The point is: pick a few slices of the entire pie, define what they are, set the expectations, roll it out, get some internal wins (and learn from them), then go after a few more slices of the pie.

Hard to search/find things

The interface must be intuitive; the end users shouldn’t have to understand a database schema in order to search for CIs.   Many of the users will only log into the CMDB a few times a year.  A user should be able to hit some internal website, get forwarded to the CMDB interface, issue a search, press print and run off to their DR planning meeting (or Solaris migration project, etc).

Over designed/engineered Schema

For those doing a roll-your-own CMDB, good for you; it is nice that you are spending time designing the database schema and planning for the future… but don’t get stuck planning for 2020. Your plans for the CMDB and schema WILL NOT BE ACCURATE – accept it.

One Stop Shopping

“We are looking for a CMDB, so this is a good time to purchase a new Change Management system, Problem, Help desk, etc, etc, etc…” – and you have just delayed purchasing and rolling out anything for the next 18-24 months between the pilots and the lengthy executive sign-offs driven by cost and implementation time frame.   Again, good idea: they need to work together in harmony, you need a plan, you need interoperability, but you also need to solve some business problems sooner.

Bottom Up = WRONG approach

If you’ve ever talked to the builders or owners of a CMDB, the conversation often quickly gets down into the weeds of attributes, relationships and types of CIs.  This is all interesting information and detail, but… who cares?   In the end, who is the target audience, and what will they need to get out of the CMDB?  Take a top-down approach to the implementation.  If you have a clear vision (or at least the goal of a vision), it can in turn clearly define the types of CIs you will initially need, and potentially the specific attributes and dependency information.   It will probably also help you determine what types of integrations the CMDB might need with other systems in order to populate and maintain the CIs.  If you take a bottom-up approach to the implementation of the CMDB, you will get stuck in the weeds and you may not have a clear answer as to whether the design/approach/solution/product/etc will meet the end users’ vision/goals.
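As a small illustration of working top-down, you might start from a definition like the following and let it dictate which CI types, attributes and feeds to tackle first. Everything here – the service, the questions, the fields – is invented for illustration.

    payroll_service = {
        "audience": ["service desk", "change management"],
        "questions": ["is it up?", "what changed last night?"],
        "ci_types": {
            "application": ["version", "owner"],
            "database":    ["instance", "owner"],
            "server":      ["hostname", "os", "location"],
        },
        "relationships": ["application RUNS_ON server", "application USES database"],
        # the integrations are implied by the questions, not chosen up front
        "feeds": ["monitoring tool", "change management system"],
    }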

Okay, for those of you not counting, I only listed 9.  In the comments below, give me your 10th one.  Don’t be shy – share a 10th one or a funny story about one.

Tobin

IT Transformation Begins Today – Resistance is Futile

Tags: Best Practices, Business Alignment, Business Service Management, Cloud, IT Management, Service Providers, Transformation


The conversation IT has with the business must change this year and this is the year of transformation predicted back in 2003 by Nicholas Carr in his Harvard Business Review article, “IT Doesn’t Matter”.  The cloud, virtualization and the growing number of service providers with as-a-Service options are the catalyst that will force this transformation in market time during 2011.

If you have kept up with my news posts this week, I swear I was unable to find much news to comment and post on that did not have to do with the service providers and transformation.  My favorite post of all is the CIO article, “ITIL vs The Cloud:  Pick One”.  REALLY?  You are kidding, right?  This and another post, “Consider the Cloud a Solution, not a Problem”, are exactly the headlines and mentality that will send IT jobs to the proverbial cloud, just as Nicholas Carr predicted.

The way we manage technology and our processes today should not hold us in the past.  Amazon is doing it again.  Amazon changed the industry from bricks-and-mortar retail to online almost overnight.  Transform or die: it happened and it is happening again.  Amazon is offering infrastructure as a Service, purchasable on a credit card.  Now let’s start watching the leaders in each industry flip-flop based upon those who embrace new technology and agile development AND have the foresight to service-enable their workloads, instead of dissing and complaining about what and how much monitoring Amazon should be responsible for.  IT is responsible for measuring and communicating service performance: instrument your workloads and inject them with the intelligence required to communicate service performance.  Those who do will be the transformational leaders of tomorrow.

Communicating service performance is on IT – are you making the transformation?

Michele

The multi-layer Service Catalog

Tags: Best Practices, Business Service Management, CMDB, CMS, IT Knowledge Exchange, IT Management, ITIL, ITSM, ITSM Solutions, Service Level, Service Providers, Trends


I ran across this article the other day by Doug Mueller and it reminded me of the multi-layer Service Catalog.  I’m not sure if this is an actual term or not, but it’s a good description of what it is.  If you take a very large organization that is broken up into distinct areas such as the teams that support:

  • Hardware & Operating Systems
  • Technologies (web servers, databases, messaging bus, etc)
  • Applications (email, timesheet, payment processing)

Mature IT groups typically drive towards a list of supported hardware and supported operating systems, and they typically also drive towards a list of technologies that will be supported within the environment.  On top of these, some common applications (or services), such as email and the corporate web server, are then provided to the employees.

If you walk through this, each of those teams has its own Service Catalog (and, as Doug said, a Service Request Catalog).   Someone in the technology area, after significant research, wants to make a new technology available for sharing documents.  That person asks the hardware group for hardware and an operating system to be provisioned for this technology to run on.

Someone in the application area then decides to tie the document sharing, email, web and video together for a collaboration solution, so they in turn request services.  The end users then request access to the collaboration service.

I have seen a few different approaches to this, ranging from separate links on the internal website to request hardware with an OS and another set of links to get applications/technologies installed, down to simple help desk requests.   Regardless, while it may not be a full-fledged electronic Service Request Catalog at each layer, many companies do have lists of approved hardware, operating systems and technologies.
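Here is a deliberately simplified sketch of those layers and how a request walks down through them. The catalog entries and names are invented for illustration, not drawn from any real catalog.

    catalog = {
        "hardware_os": ["x86 server + Linux", "x86 server + Windows"],
        "technologies": {
            "web server":       {"runs_on": "x86 server + Linux"},
            "database":         {"runs_on": "x86 server + Linux"},
            "document sharing": {"runs_on": "x86 server + Windows"},
        },
        "applications": {
            "email":         {"uses": ["web server", "database"]},
            "collaboration": {"uses": ["document sharing", "web server"]},
        },
    }

    def provisioning_chain(app):
        """Walk an application request down through the technology and hardware layers."""
        techs = catalog["applications"][app]["uses"]
        hardware = {catalog["technologies"][t]["runs_on"] for t in techs}
        return {"technologies": techs, "hardware_os": sorted(hardware)}

    print(provisioning_chain("collaboration"))

Each layer only offers items built on approved items from the layer below, which is exactly the point of keeping a catalog at every level.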

Tobin

Business Service Management and CMDB

Tags: Availability, Best Practices, BSM, Business Alignment, Business Service Management, CMDB, CMS, IT Management, IT Management Tools, ITIL, Service Level


So you have a console that holds your Business Service Management views.   You set up the views to show the key Services you are providing to your end customer(s) (EMail, Databases, CRM, etc).  You are somehow bringing in monitoring data to light up the service views and show some type of condition and health. You figured out how to measure the Service Levels and provide all of these details back to the end users and management in a dashboard.  The question is: how do you maintain it?

If you have been following ITIL, one approach is to integrate the BSM solution with the CMDB solution (assuming they are different solutions).   The CMDB probably has discovery populating it with new CIs and updates to existing CIs.  The CMDB should also have inputs from other systems for additional details around the CIs.   In the end, the CMDB is the location for the factoids around the Services: all of the CIs comprising the Service, the relationships between the CIs, the current configuration of the CIs and so on.   If those details are available, why wouldn’t you use them to drive the way in which IT manages the environment?   As things change within the enterprise, the CMDB is updated, and in turn the BSM views should auto-magically update as well.
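A minimal sketch of that maintenance loop, assuming a toy CMDB and a toy monitoring feed (not any specific product’s schema): the service view is recomputed from the CMDB’s CI list, so a discovery update flows straight into the dashboard without anyone editing the view by hand.

    cmdb = {
        "EMail": ["mailserver01", "mailserver02", "maildb01"],
        "CRM":   ["crmapp01", "crmdb01"],
    }
    monitoring = {"mailserver01": "ok", "mailserver02": "warning",
                  "maildb01": "ok", "crmapp01": "ok", "crmdb01": "critical"}

    SEVERITY = {"ok": 0, "warning": 1, "critical": 2}

    def service_views():
        """Service health = worst condition of the CIs the CMDB says it depends on."""
        views = {}
        for service, cis in cmdb.items():
            worst = max((monitoring.get(ci, "ok") for ci in cis), key=SEVERITY.get)
            views[service] = {"cis": cis, "health": worst}
        return views

    # When discovery adds a CI to the CMDB, the next call picks it up automatically.
    cmdb["EMail"].append("mailgateway01")
    print(service_views())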

Tobin

Accidental Cloud Leader – Stealth Cloud Followers – Which Cloud are you On?

Tags: Best Practices, Business Alignment, Business Service Management, Cloud, Service Providers


Are you leading your organization’s cloud roll-out or are you reacting to it? It is happening – better to lead than follow!

The WorkloadIQ post and the CIO article Richard references on the Stealth Cloud remind me of a previous Networkworld article about the Accidental Cloud Leader.  Both of these articles point out that the cloud is coming; the choice facing IT organizations is whether to lead – controlling costs, mitigating risk and delivering quality service – or to follow, with rising costs, reactive IT, high risk and poor service quality. Richard hits the nail on the head: IT is traditionally change-averse and insecure about the concept of outsourcing services. Technology is evolving faster and faster, and the very organization that should adopt, deploy and lead with technology continues to lag.

In almost all cases, sourcing decisions are made to create change that an organization has difficulty bringing about on its own, not for cost reasons. Commodity functions are best suited for outsourcing, driving standards and managing costs. However, outsourcing the service does not remove accountability for managing service delivery.

Cloud providers are popping up faster than service providers did during the dotcom boom days of web hosting, application hosting, etc. There are several key factors to consider, as pointed out in these articles and blog posts:

  • Availability of service
  • Risk of a secure service
  • Reliability of the service provider
  • Cost of support

Availability of Service and Reliability of the service provider
The dotcom bust of service providers in the early 2000s came down to a lack of mature management processes. Many providers today are one significant outage away from being out of business. Is this who you trust your services to? Who is managing and leading the due diligence in contracting for these services in the leader/follower scenario?

When seeking service providers, it is important to understand their management processes and capabilities. You do not want to define them, but a lack of management transparency and process tells you a lot about the maturity of the service provider and their ability to deliver available services. One thing to note here: do not ask for inappropriate service levels and/or penalties. Investigate their typical services, leverage the cloud and service providers for the commodity, and take advantage of the economies of scale they offer.

Risk of a secure service
Security as an obstacle to going to the cloud or leveraging an as-a-Service provider is, quite frankly, IT noise. As described in these articles and blogs, this is the service provider’s business and they know it is their number one objection. In many cases, they may offer a far more secure environment than most IT organizations – thus the rise of IT insecurity and noise. However, again, it is an area that must be investigated as it relates to the mature management practices of a service provider.

Cost of support
Organizations frustrated with IT as a perceived obstacle to agility and innovation are going to the cloud directly. As Richard’s blog points out, this costs your IT organization more in the long run to support: the service will go down, the business will call support for help, the provider may not be reliable and, in the worst case, data and security can be breached.

Management generally lags new technology, and this cycle of going to the service providers directly for a defined service at a defined cost is more appealing to the business. Management lags both within IT internally and with the service providers, compounding the risk of an outage or security breach.

Providing the ability to monitor, manage and measure technology services internally, as well as the performance and availability of the service provider, ensuring quality service delivery, will be key. Service-enabling your infrastructure could not be easier today and would provide the control and agility your organization is screaming for from IT. Management does not have to be an afterthought, and the right platform can future-proof your services with technology-adoption agility; it merely takes some proactive planning.

Check out these articles and then answer: are you following or leading your organization’s cloud rollout? It is happening and coming… Are you Stealth or Leading? What are your challenges and concerns?

Consider the Cloud as a Solution, Not a Problem – Forrester Blogs

Tags: Best Practices, BSM, Business Alignment, Cloud, Forrester, IT Management


It’s rumored that the Ford Model T’s track dimension (the distance between the wheels of the same axle) could be traced from the Conestoga wagon to the Roman chariot by the ruts they created. Roman roads forced European coachbuilders to adapt their wagons to the Roman chariot track, a measurement they carried over when building wagons in America in the 19th and early 20th centuries.  (read more…)

The Hub Commentary

I like this post by my friend JP; it brings a whole new meaning to “being stuck in a rut”.  I ask, what is the point of new technology if we cannot use it as designed?  Think about when you are purchasing a product or service and are greeted with “the system doesn’t work that way”.  Generally, I do not care how the system works – I just want to buy something or exchange something and not be bothered with what it takes someone to perform the task.

So I always find it curious why, as IT professionals, we impose these obstacles and roadblocks to progress in supporting our business.  Another news piece by one of JP’s colleagues also makes the point of why start-ups are successful: they meet the customer requirements.

Market dynamics and business requirements change – how, as IT, do we use new technology to craft innovative solutions, as JP says, and operationally figure out how to monitor, manage and measure it?  The service providers know how to do this and will feast in 2011 on the IT organizations that do not become agile to market dynamics and business requirements.

Michele

Apply a “Startup” Mentality to Your IT Infra & Ops – Forrester Blogs

Tags: Best Practices, BSM, Business Alignment, Business Service Management, Forrester, ITSM, Service Level


Cash-starved. Fast-paced. Understaffed. Late nights. T-shirts. Jeans.

These descriptors are just as relevant to emerging tech startups as they are to the typical enterprise IT infrastructure and operations (I&O) department. And to improve customer focus and develop new skills, I&O professionals should apply a “startup” mentality. (read more…)

The Hub Commentary

I find this a great analogy for IT organizations, having sat on both sides of the fence.  I was once an IT application development & support systems analyst.  I remember working on my first client-server application (I know that dates me) and meeting with a technical engineer from a start-up software company who had come to help me with some pointers on using the tool for my project.

One of the first things we discussed was why I wasn’t pointing the application I was developing directly at the database server.  My response: “Because if I use the database in the application, then all the users will have to have access/licenses, and that comes with a fee.”  So let’s think about this: they are paying you to develop an application, drop a spreadsheet of numbers into a database you paid for, then extract it again to display in this tool that you paid for.  Yeah.  Dilbert cartoon in the making!  Incredible silence fell as we both realized we were spending more money to avoid database licenses than if we had bought those licenses and used the technology and the application I was developing.  Dilbert!

That was 20+ years ago (yikes), and as IT organizations we are often penny-wise and a dollar short.  2011 will no doubt be a tipping point for the service providers, with business frustration at an all-time high and access to new services prevalent.  How an IT person answers the question “What business are you in?” is telling to me.  If the answer is “I am in desktop support”, I know it is an inward-facing IT organization.  If the answer is “New drug development – I keep the scientists in R&D working as they race against the clock to get formulas to the FDA and be first to market”, I know this is a business-driven organization and there is no business ‘and’ IT.

Think like a start-up – technology is a solution for innovation, not a problem to support and operate.

Michele


The Essential Elements of a Private Cloud – Cloud Computing Journal

Tags: Best Practices, Business Alignment, Business Service Management, Cloud, Cloud Computing Journal


Not long ago, Forrester analyst James Staten wrote a report with the compelling title: You’re Not Ready for Internal Cloud.  (read more…)

Cloud computing will drive Business Service Management practices.  The first point – pushing the things that are standard to the cloud – is discussed in one of my previous blogs.  Don’t keep in-house that which should be outsourced, and don’t outsource that which is unique to your business.

However, remember to manage the private and public cloud as services, with service levels and performance monitoring.

Reshaping IT Management for Turbulent Times – McKinsey Quarterly

Tags: Best Practices, Business Alignment, Business Service Management, IT Management, McKinsey


Despite decades of increasingly intensive use of information across industries, IT has remained a black box for many executives. Too often, the link between spending and performance has been unclear, if not problematic. As a result, leaders felt that their only course of action was to hire a competent CIO, throw increasing amounts of money at IT, and hope for the best.  (read more …)

7 Things You Need to Build a Cloud Infrastructure – PCWorld

Tags: Availability, Best Practices, Business Service Management, Cloud, IT Management Tools, ITSM, PCWorld, Service Level


Today, service providers and enterprises interested in implementing clouds face the challenge of integrating complex software and hardware components from multiple vendors. The resulting system can end up being expensive to build and hard to operate, minimizing the original motives and benefits of moving to cloud computing.  (read more…)