Saturday, 17 September 2011

Cloud Computing: adoption issues


In my previous post about "Cloud Computing", I tried to explain my understanding of cloud computing so far. In this post I try to highlight some important points and concerns, drawn from various research studies, that need attention when planning cloud-based services for any organisation.

Issues with Cloud adoption

There are three types of potential users of cloud computing services: consumers, small organizations, and medium to large organizations. Consumers and small organizations have relatively simpler requirements for adopting a new technology than medium to large organizations, and have much less to lose if the adoption goes awry. There are at least eight types of adoption issues for cloud computing:
Outage (availability)
Security
Performance
Compliance
Private clouds
Integration
Cost
Environment

Of these, outage, security, and performance are quality of service (QoS) issues. Only some of the adoption issues matter to consumers and small organizations. However, all of them are of concern to medium to large organizations.

1. Outage

An outage may be temporary or permanent. A permanent outage occurs when a cloud service provider goes out of business. This has happened, and will happen again. Temporary outages of cloud computing services appear to be inevitable. They may happen several times a year, and each time may last a few hours, nearly a full day, or even longer. When large cloud services become unavailable, the outage receives nearly instant worldwide coverage. Services such as Amazon, Google, and Citrix have experienced highly publicized outages during the past couple of years.

The users of a cloud service should exercise prudence and take one or more of the following precautions. First, they should not entrust absolutely mission-critical applications and data to cloud service providers; that is, they should use cloud services only for non-mission-critical applications and data. This explains the current uses of cloud services for Web site hosting, software testing, and online data backup. Second, they should keep backups of applications and data on on-premises servers and storage, or on a secondary cloud service. Third, they should secure as favorable a service-level agreement (SLA) as possible from the cloud service provider, one that gives at least partial redress in case of temporary outages. Note that none of these precautions is entirely satisfactory. The first limits the use of the cloud service. The second and third erode its cost advantage. The third never fully compensates for the actual damage.
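The second precaution, keeping a copy both on-premises and on a secondary service, can be sketched in a few lines. This is a minimal illustration only: the `backup` helper is invented here, local directory paths stand in for both targets, and a real setup would replace the copy call with each provider's own upload API.

```python
import shutil
from pathlib import Path

def backup(source: str, destinations: list[str]) -> list[str]:
    """Copy a file to every backup destination; return the paths written.

    In practice one destination would be an on-premises store and the
    other a secondary cloud bucket; plain directories stand in for both.
    """
    written = []
    for dest in destinations:
        Path(dest).mkdir(parents=True, exist_ok=True)
        target = Path(dest) / Path(source).name
        shutil.copy(source, target)  # swap for the provider's upload call
        written.append(str(target))
    return written
```

The point of the sketch is simply that every backup run writes to both targets, so a temporary outage of either one still leaves a usable copy.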

2. Security

The security of computer systems, and of the data stored on them, can be compromised in so many ways that 100% security is simply impossible. Sophisticated hackers can break into just about any computer system. A cloud may become a “honey pot” that attracts hackers. Accidents may happen during physical transportation or electronic transfer of a large volume of data. Dishonest staff members may tamper with the computer system or data. We believe, however, that clouds are not less secure than on-premises computing systems. There is no reason that the best security technologies and processes adopted for on-premises computing systems cannot also be used by cloud service providers.

Further, the effects of security breaches on cloud service providers are as great as, or even greater than, those on medium to large organizations. As such, cloud service providers should be highly motivated to do their best to secure their servers and data. The security measures that Amazon Web Services employs may serve as a model for other cloud service providers. The user can run a customized machine image with full root access, have his own ingress firewall, and apply granular access control to every file. To access Amazon’s resources, the user needs a certificate, a public key, a long ID string, and a security ID.

3. Performance

A major source of performance problems for cloud services is the communication time between the client computer and the Web server in the cloud. This problem becomes serious as the number of simultaneous users increases and the amount of data transferred to and from the cloud grows. Even the physical distance between the client computer and the cloud makes a difference. Organizations sometimes discover the need to substantially increase communication bandwidth shortly after adopting cloud services. Before adopting cloud services, organizations must assess their communication bandwidth requirements and evaluate the performance behavior of their applications with respect to transfers of large amounts of data. Another source of performance problems is the inability of the service provider to scale up its computing infrastructure as customer demand increases beyond the original expectation. Before adopting cloud services, organizations must understand the service provider’s capacity assumptions and scale-out plans for its computing infrastructure.
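Assessing bandwidth requirements can start with a back-of-the-envelope estimate. The sketch below, with an invented helper name and illustrative figures, computes raw transfer time from data volume and link speed; it ignores protocol overhead and congestion, so treat the result as an optimistic lower bound when sizing a link.

```python
def transfer_time_hours(data_gb: float, bandwidth_mbps: float) -> float:
    """Rough wall-clock time to move data_gb over a link of bandwidth_mbps.

    Ignores protocol overhead and congestion, so this is a lower bound.
    """
    bits = data_gb * 8 * 1e9                 # decimal gigabytes to bits
    seconds = bits / (bandwidth_mbps * 1e6)  # megabits/s to bits/s
    return seconds / 3600
```

For example, moving 1 TB (1,000 GB) over a 100 Mbps link takes at least about 22 hours, which is the kind of number that surprises organizations after adoption rather than before.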

4. Compliance

In most countries, enterprises are subject to government regulations regarding the secure storage, privacy, and disclosure of data. These regulations were written without consideration of cloud computing, that is, of an enterprise storing data on third-party computing facilities that are shared with other enterprises. It is not clear whether cloud computing will violate such regulations.

5. Private Clouds

A private cloud is an on-premises cloud. A private cloud, except for its physical location, works just like a normal, or public, cloud. The virtual machines and storage are created by virtualizing physical computing resources; and the virtual computing resources are dynamically allocated and deallocated based on the needs of the users. Further, the users or departments in the enterprise are charged for the services they actually use.

Since the term “cloud” was coined to refer to a remote third-party service provider, the term “private cloud” is an oxymoron. Besides, one of the primary motivations touted for cloud services has been freedom from having to administer on-premises computing resources. A possible justification for the term is that a private cloud is envisioned as a central cloud for an enterprise, accessed by users in different departments as though it were a remote computing resource. In any case, the concept of private clouds has gained ground recently.

Private clouds can serve as a halfway step before the adoption of public cloud services. Enterprises can gain experience using cloud services and prepare their IT infrastructure and staff properly. Further, enterprises can build hybrid cloud services from their private clouds and some public clouds. For example, when the capacity of the private cloud is exceeded, the enterprise may tap into the public cloud. There are adoption issues for hybrid cloud services. Today, if a workload is to be moved from a private cloud to a public cloud, both clouds require the same hypervisor, the same server chipsets, and the same file system. Further, virtualization vendors have different virtual machine formats. To alleviate this problem, the Distributed Management Task Force has proposed the Open Virtualization Format (OVF).

6. Integration

Since organizations may need to adopt multiple service providers for various reasons, they need to integrate applications and data across multiple public clouds. Further, since many organizations are likely to adopt hybrid clouds, they need to integrate applications and data between their private clouds and the public clouds. Technologies such as enterprise information integration (or federated database systems), enterprise application integration, and the enterprise service bus can be adapted to address these cloud integration issues.

7. Cost

Cost is generally not regarded as an adoption issue. People take the “only pay for what you use” part of the marketing definition of cloud computing as a given. In the 1980s and 1990s, people took for granted the promise of cost savings in outsourcing software development. The cost savings, while still significant, turned out to be much less than had been presumed, because of the need to communicate between the two parties (e.g., travel, stationing staff), to redo work that was not done properly, gradual increases in fees charged, and so on. Similarly, the promised cost benefits of cloud computing are bound to be eroded. As observed above, the need to maintain on-premises backups or secondary cloud services to cushion the impact of occasional outages certainly adds to the cost, as does the need to increase communication bandwidth to maintain a desired performance level. Further, the “remote administering of computing resources” part of the marketing definition of cloud computing does not mean that organizations adopting cloud services can depend totally on the service providers to administer their applications, virtual machines, and storage. The organizations still need to monitor the performance and availability of the virtual computing resources.

There are various monitoring tools, both commercial and open source. Monitoring requires staff time, and possibly commercial tools. These add to the cost. IaaS service providers create virtual computing resources out of physical computing resources, and allocate the virtual computing resources to different users. This means that multiple users share common physical computing resources. Some organizations insist on having dedicated physical computing resources in the cloud in order to prevent other “tenants” from possibly crossing paths. Use of dedicated physical resources in the cloud can substantially erode the cost benefits of cloud computing.

Cloud Computing – It’s not just another buzzword, but a near future



WHAT IS CLOUD COMPUTING?

Google, Yahoo, Amazon, and others have built large, purpose-built architectures to support their applications and taught the rest of the world how to do massively scalable architectures to support compute, storage, and application services.

Cloud computing is about moving services, computation and/or data—for cost and business advantage—off-site to an internal or external, location-transparent, centralized facility or contractor. By making data available in the cloud, it can be more easily and ubiquitously accessed, often at much lower cost, increasing its value by enabling opportunities for enhanced collaboration, integration, and analysis on a shared common platform.

Cloud computing can be divided into three areas:

SaaS (software-as-a-service). WAN-enabled application services, e.g., Google Apps, Salesforce.com, WebEx
PaaS (platform-as-a-service). Foundational elements to develop new applications, e.g., Coghead, Google App Engine
IaaS (infrastructure-as-a-service). Computational and storage infrastructure provided as a centralized, location-transparent service, e.g., Amazon

Enabling technologies. The following precursor technologies enabled cloud computing as it exists today:
SaaS
Inexpensive storage
Inexpensive and plentiful client CPU bandwidth to support significant client computation
Sophisticated client-side technologies, including HTML, CSS, AJAX, and REST
Client broadband
SOA (service-oriented architectures)
Large infrastructure implementations from Google, Yahoo, Amazon, and others that provided real-world, massively scalable, distributed computing
Commercial virtualization

CAPEX (CAPITAL EXPENSES) VS. OPEX (OPERATING EXPENSES) TRADEOFF

In the past, developing an application service required a large CapEx (capital expense) to build infrastructure for peak service demand before deployment. The uncertainty of a service’s success, combined with the operational requirement of a large CapEx investment, severely restricted funding. Cloud computing addresses this problem by allowing expenses to track closely with resource use, following income rather than requiring purchase for peak capacity before income is realized. Running application services on a cloud platform accomplishes this in three fundamental ways:
It moves CapEx to OpEx (operational expense), closely correlating expenses with resource use.

It allows service owners to eliminate significant system-administration head count by avoiding the need for internally purchased servers.

It smooths the path to service scaling by not requiring the CapEx-intensive architectural changes needed to scale up service capacity in the event of service success.

Because the cost of deploying new services is much lower and expenses track real usage, businesses can develop and deploy more services without fear of writing off huge capital investments for dedicated infrastructure that may never be needed. While start-ups are more focused on cost, enterprises are equally focused on flexibility to make required service changes and achieve maximum agility. Some Silicon Valley start-ups are able to go completely without infrastructure, instead using outside services for e-mail, Internet, phone, and source control. This allows the start-up to focus all of its resources on its core differentiating efforts.
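The CapEx-to-OpEx tradeoff can be made concrete with a simple break-even sketch. The helper and every figure below are hypothetical, purely to illustrate the comparison between buying hardware up front and paying a cloud fee as you go.

```python
def breakeven_months(capex: float, monthly_onprem_opex: float,
                     monthly_cloud_fee: float) -> float:
    """Months until owning hardware becomes cheaper than renting it.

    capex: up-front purchase cost of the internal infrastructure
    monthly_onprem_opex: running cost of that infrastructure per month
    monthly_cloud_fee: equivalent pay-as-you-go charge per month
    """
    saving_per_month = monthly_cloud_fee - monthly_onprem_opex
    if saving_per_month <= 0:
        return float("inf")  # the cloud is always cheaper at this load
    return capex / saving_per_month
```

With an assumed $12,000 server, $200/month to run it, and a $700/month cloud equivalent, ownership breaks even after 24 months; a service that may be cancelled before then is better off in the cloud, which is exactly the risk argument above.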

BENEFITS OF CLOUD COMPUTING
Large-scale multitenancy achieves significant economic advantage. Sharing the resources and purchasing power of very large-scale multitenant data centers provides an economic advantage. As an example, a major engineering services company’s current internal cost to provide a gigabyte of managed storage is $3.75 per month, while Amazon charges 10 to 15 cents per month. Initially, ISP charges at the company were $3,500 per megabyte per month. After examining the cost structure of companies such as YouTube, the engineering services company assumed that YouTube’s costs were in the teens. Taking advantage of network peering arrangements and consolidating the company’s interfaces to a place close to the ISP’s POP (point-of-presence) have brought costs down to YouTube levels.
Broad use of virtualization has also significantly reduced the company’s data-center CapEx. Prior to virtualization, server utilization was between 2 and 3 percent and total data-center floor space was around 35,000 square feet. With virtualization in widespread use, server utilization is up to 80 percent and server consolidation has shrunk the square footage of the data center to 1,000 square feet.
As more cloud service vendors become available, computing and storage will become a true commodity with fine-grained pricing models, complete with arbitrage opportunities, similar to other commodities such as natural gas and electric power. Under a cloud model, pricing is based on direct storage use and/or the number of CPU cycles expended. It frees service owners from coarser-grained pricing models based on the commitment of whole servers or storage units.
Transforming high fixed-capital costs to low variable expenses. Setting up an internal cloud within a company provides an efficient service platform while placing a limit on internal capital expenditures for IT infrastructure. An external cloud service provider can supply overflow service capacity when demand increases beyond internal capacity.
Previously, companies had to operate servers for projects even though they might never be invoked—such as servicing warranties. A cost of $800 to $1,000 per month is not unreasonable to have a server idle on the data-center floor. By moving these projects to an external IaaS vendor, those functions can be placed in the cloud and the service run only when required, at pennies per CPU hour. In this way, companies can transform what was a high fixed cost into a very low variable one.
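The arithmetic behind that transformation is trivial but worth making explicit. The rate below is an assumed figure, not a quoted price; the text's only givens are $800 to $1,000 per month for the idle server and "pennies per CPU hour" for the on-demand alternative.

```python
def monthly_on_demand_cost(hours_used: float, rate_per_hour: float) -> float:
    """Pay-per-use cost for a rarely invoked workload."""
    return hours_used * rate_per_hour

# An idle dedicated server: $800-$1,000/month regardless of use.
# The same workload run on demand for 20 hours at an assumed $0.10/hour:
cloud_cost = monthly_on_demand_cost(20, 0.10)  # a couple of dollars
```

The fixed cost is incurred whether or not the warranty-servicing function ever runs; the variable cost is incurred only when it does.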
Flexibility. For large enterprises, the ease of deploying a full service set without having to set up base infrastructure to support it can be even more attractive than cost savings. Bechtel must set up new engineering centers with very little notice worldwide. Using internal cloud IT resources, Bechtel can now set up these centers to be fully functional within 30 days.
Smoother scalability path. For application architectures that scale easily with added hardware and infrastructure resources, cloud computing allows a single service to scale over a wide demand range. Animoto’s service started with 50 instances on Amazon. Because of its popularity, it was able to meet soaring demand and scale to 3,500 instances within three days. It is not a given, however, that all application architectures scale that easily. Databases are a good example of hard-to-scale applications; hence the widespread use of programs such as Amazon’s SimpleDB.
Self-service IT infrastructure. Cloud-computing service models are often self-service, even in internal models. Previously, you had to partner with IT to develop your application, provide an execution platform, and run it. Now, much like Amazon, IT departments define use policies for automated platform and infrastructure services, with line-of-business owners developing applications on their own to meet those requirements.
Severely reduced disaster recovery cost. Most SMBs (small- to medium-size businesses) make no investment in DR (disaster recovery). By enabling VMs (virtual machines) to be sent to the cloud for access only when needed, virtualization becomes a cost-effective DR mechanism. Typical DR costs are 2N (twice the cost of the infrastructure). With a cloud-based model, true DR is available for 1.05N, a significant savings. Additionally, because external cloud service providers replicate their data, even the loss of one or two data centers will not result in lost data.
Common application platform enables third parties to add value. While telcos are moving to cloud platforms for cost effectiveness, they also see opportunities resulting from a common application platform. By allowing third parties to use their platforms, telcos can deploy services that either extend the telco’s services or operate independently.
Increased automation. Amazon sees automation as a significant benefit of a cloud services model. Moving into the cloud requires a much higher level of automation because moving off-premises eliminates on-call system administrators.
Release from ABI and operating-system dependencies and restrictions. Amazon also sees cloud computing as a way of releasing data centers from the need to support the ABI (application binary interface) and operating-system requirements of key applications. With EC2, Amazon provides five popular VMs to choose from: three flavors of Linux, OpenSolaris, and Windows Server. Its only concern is effectively running the VMs; it does not have to be involved with the VM’s internal operations.
MapReduce enables new services. Although not the most cost-efficient way of providing data-warehouse functionality, MapReduce’s use of a large parallel-processing resource has enabled a number of companies to provide cloud-based data-warehousing services. This frees customers from having to invest in large specialty hardware purchases for small service requirements. MapReduce is expected to enable additional service types that were once limited to dedicated hardware.
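The MapReduce programming model itself is simple to sketch in miniature: a map step emits key-value pairs and a reduce step aggregates them per key. The single-process toy below illustrates only the model; a real deployment runs the map calls in parallel across many nodes and shuffles the pairs by key before reducing.

```python
from collections import defaultdict
from itertools import chain

def map_phase(document: str):
    """Map step: emit a (word, 1) pair for every word in the document."""
    return [(word.lower(), 1) for word in document.split()]

def reduce_phase(pairs):
    """Reduce step: sum the counts for each distinct word."""
    totals = defaultdict(int)
    for word, count in pairs:
        totals[word] += count
    return dict(totals)

def word_count(documents):
    # A real cluster distributes map_phase across nodes and shuffles
    # the emitted pairs by key; here everything runs in one process.
    return reduce_phase(chain.from_iterable(map_phase(d) for d in documents))
```

Because each map call is independent, adding nodes adds throughput almost linearly, which is what makes the model attractive for the data-warehousing services described above.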

USE CASES FOR CLOUD COMPUTING
An international financial exchange paid for the development of a large service. It hosted data in the cloud and ran the application on the client’s desktop. All operations were on a pay-as-you-go basis. This is an example of a very low initial investment required to make a commercial service operational.
Shazam is a start-up company whose service executes on the Apple iPod. It samples songs being played on the radio, matches the songs to a library in the cloud, and returns a link to purchase that song on the iPod. It is an example of a smart device coupled with cloud-based computation and storage.
Animoto, hosted on Amazon, was able to track demand of its service and scale up from 50 instances to 3,500 instances over a three-day period.
A national newspaper wanted to place scanned images covering a 60-year period online. After being repeatedly turned down by the CIO for the use of six servers, the newspaper moved four terabytes into S3, ran all the software over a weekend on EC2 for $25, and launched its product.
A major international auto-race organizer supports special race Web sites that provide live streaming video and realtime technical information. Previously, it would retain an ISP, acquire massive server power, and hire 500 engineers to baby-sit the servers at the ISP to institute server failover manually. When it moved to EC2, the savings in server rental were not that big, but it did realize orders of magnitude in personnel cost savings.
Mogulus streams 120,000 live TV channels over the Internet and owns no hardware except for the laptops it uses. It did all of the election coverage for most of the large media sites. Its CEO states that he could not be in business without IaaS.

DISTANCE IMPLICATIONS BETWEEN COMPUTATION AND DATA

How you deal with the distance between computation and data depends heavily on application requirements. If you need to minimize expensive bandwidth, then you should find a way to keep the two in proximity. In cases where bandwidth is expensive and the distances cannot be shortened, it may make sense to download an extract of the data to work on it locally. Longer term, it would be best if developers could write the application in such a way that it would dynamically adjust its data-access mechanisms in response to the operational context (bandwidth cost, bandwidth latency, security, legal data location requirements, etc.).
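The download-an-extract decision described above reduces to comparing two bandwidth bills. The helper and figures below are illustrative assumptions; as the text notes, a real decision would also weigh latency, security, and legal data-location constraints.

```python
def cheaper_to_extract(query_gb: float, extract_gb: float,
                       cost_per_gb: float, runs: int) -> bool:
    """True if downloading one extract and working locally costs less in
    bandwidth than running every query remotely.

    query_gb: data moved per remote query run
    extract_gb: size of the one-time local extract
    cost_per_gb: bandwidth price (assumed flat both ways)
    runs: number of query runs expected
    """
    remote_cost = query_gb * runs * cost_per_gb   # pay per run, forever
    extract_cost = extract_gb * cost_per_gb       # pay once up front
    return extract_cost < remote_cost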

DATA SECURITY

A common concern about using an external cloud service provider is that it will make data less secure. Because of the wide quality range of corporate IT security, trusting information assets to a recognized cloud service provider could very easily increase the security of those assets. Given that many corporate data centers struggle to fund, architect, and staff a complete security architecture, and that cloud service providers provide IT infrastructure as their primary business and competence, clouds could possibly increase security for the majority of their users. Moreover, 75 to 80 percent of intellectual property breaches are a result of attacks made inside the company, which would not impact a decision to use clouds one way or the other.

Some Important Conclusions from various industry experts:
IT should establish a revenue metric to show the cost effectiveness of its service-delivery infrastructure. Bechtel used the inverse of the hours it took to complete its corporate projects. By increasing infrastructure utilization, it lowered its cost per unit of output by 55 percent—increasing capacity and overall satisfaction.
Although most CIOs and CTOs are interested in seeing if a cloud-based service model is a more efficient IT architecture, initial adoption is usually bottom-up and based on pragmatic business needs. Often the CIO is the last person in the adoption chain.
In deploying services for your company, make an effort first to buy those services before building them on your own. As in all successful businesses, do not get caught up in building and supporting infrastructure that is not core to your business.
If an application is performed trillions of times per day, anything with even a remote probability of failure is a certainty. Application developers must be trained to accept failure as inevitable and design for it.
Cloud computing represents a shift in power in IT away from those who control capital resources to the users and developers who employ self service to provision their own applications.
Often IT people are not rational and will resist losing control of power and budget. This can be a significant roadblock in converting to a cloud-based service architecture. Change management can often be the majority of the effort in converting to a cloud-oriented service architecture (e.g., at Bechtel it has been around 80 percent of the effort) as it takes time for people to move outside their career and risk comfort zones. A cloud service model places traditional IT skills at risk, and those people need to be transitioned from managing the physical infrastructure (such as storage or processing) to managing IT policies and service-level guarantees.
When the load varies widely, a cloud-computing service model excels. For services that impose well-defined loads, it usually is more cost effective to make the capital investment for an internal platform (e.g., running your own Microsoft Exchange server on Amazon is not a good idea).
By restructuring its Internet services to be similar to YouTube and placing interfaces closer to ISP POPs, Bechtel was able over several years to decrease latency by 50 percent and reduce Internet charges by several orders of magnitude. Bechtel currently pays $10 to $20 per megabyte per month.

Some of the challenges
Are data-ingestion services able to take physical delivery of a large amount of media for transfer to the cloud?
Are appropriate data-location choices provided to the application so that users can comply with applicable law? Depending on the laws in force, data-location compliance can be quite complex and require sophisticated abstractions.

Cloud Computing – What does it mean to System Administrators




Most of us already know that cloud computing is the new buzzword in the industry, and it is very true that everyone wants to learn as much about it as possible. For myself, I have been reading about and observing cloud computing's evolution for the past year, and recently I had an opportunity to attend IBM's SmartCloudCamp session, which gave me some insight into the current state of that evolution.

I have noticed several questions from the system admin community about cloud computing's effect on infrastructure support teams. In this post I try to address that question based on my understanding of cloud computing.


Cloud Computing

Let me tell you a small story before we discuss cloud computing.

My sister and her family live in a small town in the state of Andhra Pradesh, India. In that town, power failures are common: outages of one or two hours, two or three times per day. My sister and her neighbours were upset because these continual power outages disturbed the children's studies and made life difficult during the evenings. They knew there was an alternative, a power generator as a backup source, but most of the neighbouring families could not afford one, and they also worried about the regular maintenance cost of such devices.

One fine day, a group of smart minds came up with a solution: purchase a high-capacity power generator, place it in a common location, and provide backup power connections to every home willing to pay usage charges based on the actual consumption recorded by an electric meter installed at each home. Interestingly, the idea worked very well, and most of the people in the town adopted the backup power source with minimal capital investment and zero maintenance cost.

I believe that by this time you will have understood the purpose of cloud computing in the IT industry. If it is still unclear, let us look at it in more detail.

The current definition of cloud computing is "a comprehensive solution which delivers IT as a service", where the term IT expands to infrastructure, platform, storage, and software. At present, the IT industry is divided into two groups in terms of cloud computing: cloud computing service providers and cloud computing service consumers (clients).


Cloud Computing in its Basic Form
Quick refresh on Cloud Computing Benefits to a Client/Consumer

1. Reduced Capital Cost to setup IT Infrastructure

Scenario 1:

If an organisation wants to start a new business function that needs IT infrastructure, it need not go through the whole complex process of establishing that infrastructure, starting from data-center planning. Instead, it can simply approach a cloud computing service provider whose service catalogue offers a service that meets the organisation's IT requirements for the new business function. The requested service could be server/storage/network infrastructure, a platform environment, or an already-built software application that can be customized to the requirement. The organisation pays the service provider only for the resources actually utilized: no capital investment, no running maintenance cost.

Scenario 2:

If an organisation wants to migrate its existing IT infrastructure (or the part of it related to less critical business functions), it can again approach a cloud computing service provider for a solution that fits its actual requirement.

2. Rapid scalability with the help of dynamic infrastructure

Current Challenge:

In any business, the initial design of IT infrastructure is typically based on the current potential of the business and its expected growth in the near future. These expectations and predictions may or may not prove correct in today's highly fluctuating markets. A large investment in IT infrastructure is wasted if the related business does not do as well as expected, while at the same time insufficient IT resources can block growth if the business progresses better than expected.

It is always a real challenge for any organisation to predict its actual IT infrastructure requirement, and this challenge is easily addressed if the organisation considers a cloud computing solution.

Using cloud computing, organisations can easily scale resources to match business requirements, which are very dynamic in nature.

3. Utility Pricing Model

This point is self-explanatory: organisations pay only for the resources they have used, with no initial investment to set up infrastructure.

4. Self Service by using Automated Provisioning

I believe this is one key point where cloud computing affects existing IT infrastructure job roles.

By using the automated provisioning feature of cloud computing, organisations can request the services listed in the service catalogue and receive them instantly and dynamically, with minimal or no technology skills.
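From the requester's point of view, a self-service catalogue behaves like a tiny API. The sketch below is entirely hypothetical: the `ServiceCatalogue` class, the offering names, and their sizes are invented for illustration and do not correspond to any real provider's interface.

```python
class ServiceCatalogue:
    """Toy model of a self-service catalogue: request an item, get it
    provisioned immediately, with no admin ticket in the loop."""

    OFFERINGS = {
        "small-linux-vm":  {"cpus": 1, "ram_gb": 2},
        "medium-linux-vm": {"cpus": 2, "ram_gb": 8},
    }

    def __init__(self):
        self._provisioned = []

    def request(self, item: str) -> dict:
        """Provision a catalogue item and return its description."""
        if item not in self.OFFERINGS:
            raise ValueError(f"{item!r} is not in the service catalogue")
        instance = {"offering": item, **self.OFFERINGS[item]}
        self._provisioned.append(instance)
        return instance
```

The design point is that the requester chooses only from pre-approved offerings; the provisioning itself is automated behind the `request` call, which is what removes the traditional hand-off to an infrastructure team.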

5. Resource availability from anywhere in the world

Public clouds can be accessed from anywhere in the world over the internet, and this feature makes cloud computing a beautiful solution for many startup companies running with virtual teams located in different parts of the world.

For more information, you can refer to my other post "Cloud Computing – It's not just another buzzword, but a near future", which discusses cloud computing features and benefits.





Cloud Computing Layers



IaaS - Infrastructure as a Service

IaaS is basically a paradigm shift from "infrastructure as an asset" to "infrastructure as a service".

Key Characteristics of IaaS:
Infrastructure is Platform independent
Infrastructure costs are shared by multiple clients/users
Utility Pricing – Clients will pay only for the resources they have consumed

Advantages:
Minimal or No Capital investment on Infrastructure Hardware
No Maintenance costs for Hardware
Reduced ROI risk
Avoid the wastage of Computing resources
Dynamic in nature
Rapid Scalability of Infrastructure to meet sudden peak in business requirements

Drawbacks:
Infrastructure performance depends purely on the vendor's ability to manage resources
Consistently high resource usage over the long term could lead to higher costs
Companies have to introduce a new layer of enterprise security to deal with cloud-related security issues

Note: It is better not to adopt an IaaS solution if the organisation's capital budget is greater than its operating budget.

PaaS – Platform as a Service

PaaS is a paradigm shift from “purchasing platform environment tools as licensed products” to “purchasing them as a service”.

Key Characteristics:
Deployment is based purely on cloud infrastructure
Caters to agile project management methods

Advantages:
It is possible to capture complex testing and development platform requirements and automate the tasks for provisioning a consistent environment.

Drawback:
Enterprises have to introduce a new layer of security to deal with security in the cloud computing environment.




SaaS – Software as a Service

SaaS is basically a paradigm shift from treating “software as an asset of the business/consumer” to “using software as a service to achieve business goals”.

Advantages:
Reduced capital expenses for development and testing resources
Reduced ROI risk
Streamlined, iterative updates of the software

Drawbacks:
Enterprises have to introduce a new layer of security to deal with security in the cloud computing environment.


Layers of Cloud Computing





Cloud Computing Solutions for Enterprise



Public Cloud Solution for Enterprise

A public cloud solution allows an enterprise to adopt IaaS, PaaS and SaaS services from a cloud computing service provider over the internet; the actual computing resources remain under the vendor's control.

Private Cloud Solution for Enterprise

A private cloud solution is nothing but constructing the cloud solution within the enterprise's own datacenter, to provide more security over physical resources. The internal departments of the enterprise can then utilise and pay for cloud computing resources as if they were using public cloud resources.

Hybrid Cloud Solution for Enterprise

A hybrid cloud solution enables the enterprise to use both public and private cloud resources at the same time, depending on the criticality and importance of each business function.

Virtual Private Cloud Solution

Using a virtual private cloud solution, companies can create their own private cloud environment within the public cloud by applying different network/firewall rules. The purpose is to prevent external access to the enterprise's resources.


Possible Cloud Computing Solutions for Enterprise


How Cloud Computing affects the Job roles in the Infrastructure Support Team



Depending on the cloud computing solution that an enterprise adopts, there will be direct and indirect effects on various job roles within the infrastructure support teams.

If you look at the sysadmin role in general, the actual job role involves four major responsibilities:
Hardware administration
Operating System Builds
Operating System Administration
Network Services Administration

Once an organisation adopts a cloud computing solution (IaaS / PaaS / SaaS), it is no longer required to maintain skilled technical people to deal with hardware-related issues and OS build operations, but it still needs resources to perform OS and network administration and to customise cloud resources to meet the organisation's requirements. The same effect holds true for the network support roles.

Cloud computing solutions cannot replace every system administrator in the company, but they will demand a new level of cloud computing expertise in place of increasingly isolated hardware maintenance skills. For sure, it's a call for learning. More importantly, the sysadmin job roles dealing specifically with hardware and OS builds will have to go away in the near future.

For many organisations, the current recruitment strategy for the sysadmin team is that the number of sysadmins is directly proportional to the physical server footprint in the datacenter. With IaaS adoption, the organisation's server footprint will shrink drastically, and with it the number of sysadmin positions.

So far, clouds have been deployed to replace server infrastructure running Windows or Linux on the x86 model, but there are not yet solutions for vendor-specific server operating systems such as Solaris on SPARC, IBM AIX and HP-UX. Considering the speed of evolution in cloud computing technologies, it may not take long to provide solutions for all kinds of server infrastructure. On the other hand, if an organisation chooses to migrate its applications to x86 servers to receive the economic benefits of cloud computing, the change will be even more rapid.

The pictures below will give you an understanding of how roles move out of the infrastructure teams depending on the cloud solution adopted by the organisation.
Job Role movement with IAAS Cloud Computing Solution






Job Role movement with PAAS Cloud Computing Solution






Job Role movement with SAAS Cloud Computing Solution




One final story I want to tell you before closing this post.

As most of you are already aware, India is an agriculture-based society where people treat their land like “the mother that feeds you every day” and their cows as “part of the family wealth”. A decade ago, most families followed the traditional way of cultivation, which required many people and long working hours, and this demand for human labour was the main source of jobs in the villages for a long time.

With technology innovation in India, many new tools and machines were introduced to the Indian agricultural industry, which in turn reduced the requirement for human labour. During this change, many people back in the villages worried for a while about their livelihoods. But the worry did not last long, because most of them quickly picked up the skills these new technologies called for, such as regular maintenance of the new tools, using the tools for better productivity, and finding new land to cultivate with these machines at low cost, and ended up living better than before.

I believe the same story applies to any other industry, including IT. Whenever we notice an inevitable change coming our way, it is always wiser to understand it and get ready to accept it than to worry about it and try to resist it.



Note: All the opinions mentioned here are purely personal; please feel free to drop your comments/inputs related to the title of this post.

Friday, 16 September 2011

Investigate a SCSI disk error from Linux




Collect the following information when a platform reports SCSI disk errors:
The following files:
/var/log/messages*



The output from the following commands:
/sbin/fdisk -l
/bin/cat /proc/scsi/scsi
/bin/dmesg
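A small script along these lines can gather everything in one pass (the paths are the defaults named above; adjust them for your system):

```shell
#!/bin/sh
# Collect the SCSI diagnostic data listed above into a single tarball.
outdir=$(mktemp -d /tmp/scsi-diag.XXXXXX)
cp /var/log/messages* "$outdir"/ 2>/dev/null        # syslog files, if readable
fdisk -l            > "$outdir/fdisk-l.out" 2>&1    # partition tables
cat /proc/scsi/scsi > "$outdir/scsi.out"    2>&1    # attached SCSI devices
dmesg               > "$outdir/dmesg.out"   2>&1    # kernel ring buffer
tar -czf /tmp/scsi-diag.tar.gz -C "$outdir" .
echo "Diagnostics saved to /tmp/scsi-diag.tar.gz"
```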






Once you have collected the data, review the output for any similarities in the errors. If the error always reports the same target, then consider replacing the device itself.
If the errors change to different targets on the same bus, then further troubleshooting is required. In the following references, the id is the target number of the device.

Here are examples of the output you will see when the scsi disks have errors from Red Hat Linux.

Notice in this example, we can see that the channel is 0, id is 1 and lun is 0. Each line of the error refers to the same id. We can also see that the disk is getting errors on different sectors. In this case, id 1 is target 1. This would indicate that target 1 is getting the errors and is the suspect fru.
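Assuming the log lines match the samples that follow, a quick tally of errors per target id makes the pattern obvious (the helper name is mine):

```shell
#!/bin/sh
# Count "SCSI disk error" lines per target id so the failing
# target stands out; works on any /var/log/messages-style file.
count_scsi_errors() {
    grep 'SCSI disk error' "$1" |
        sed -n 's/.*id \([0-9][0-9]*\) lun.*/\1/p' |
        sort | uniq -c | sort -rn      # count per id, highest first
}
# Usage: count_scsi_errors /var/log/messages
```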

/var/log/messages

Dec 20 10:33:23 localhost kernel: SCSI disk error : host 0 channel 0 id 1 lun 0 return code = 27010000
Dec 20 10:33:23 localhost kernel: I/O error: dev 08:03, sector 0
Dec 20 10:33:23 localhost kernel: SCSI disk error : host 0 channel 0 id 1 lun 0 return code = 27010000
Dec 20 10:33:23 localhost kernel: I/O error: dev 08:03, sector 13631520
Dec 20 10:33:23 localhost kernel: EXT3-fs error (device sd(8,3)): ext3_get_inode_loc: unable to read inode block – inode=852064, block=1703940
Dec 20 10:33:23 localhost kernel: SCSI disk error : host 0 channel 0 id 1 lun 0 return code = 27010000
Dec 20 10:33:23 localhost kernel: I/O error: dev 08:03, sector 0
Dec 20 10:33:23 localhost kernel: EXT3-fs error (device sd(8,3)) in ext3_reserve_inode_write: IO failure
Dec 20 10:33:24 localhost kernel: SCSI disk error : host 0 channel 0 id 1 lun 0 return code = 27010000
Dec 20 10:33:24 localhost kernel: I/O error: dev 08:03, sector 0
Dec 20 10:33:24 localhost kernel: SCSI disk error : host 0 channel 0 id 1 lun 0 return code = 27010000
Dec 20 10:33:24 localhost kernel: I/O error: dev 08:03, sector 13631520
Dec 20 10:33:24 localhost kernel: EXT3-fs error (device sd(8,3)): ext3_get_inode_loc: unable to read inode block – inode=852064, block=1703940
Dec 20 10:33:24 localhost kernel: SCSI disk error : host 0 channel 0 id 1 lun 0 return code = 27010000
Dec 20 10:33:24 localhost kernel: I/O error: dev 08:03, sector 0
Dec 20 10:33:24 localhost kernel: EXT3-fs error (device sd(8,3)) in ext3_reserve_inode_write: IO failure
Dec 20 10:33:24 localhost kernel: SCSI disk error : host 0 channel 0 id 1 lun 0 return code = 27010000
Dec 20 10:33:24 localhost kernel: I/O error: dev 08:03, sector 0

Notice in the following example that the errors again report the same channel 0, id 1 and lun 0 each time. We can also note SCSI bus timeouts along with SCSI check-condition messages for the same device. In this case, target 1 is the suspect FRU.

Another example of /var/log/messages

Sep 21 23:35:41 localhost kernel: klogd 1.4.1, log source = /proc/kmsg started.
Sep 21 23:35:41 localhost kernel: Inspecting /boot/System.map-2.4.18-17.7.x.4smp
Sep 21 23:35:41 localhost kernel: Loaded 17857 symbols from /boot/System.map-2.4.18-17.7.x.4smp.
Sep 21 23:35:41 localhost kernel: Symbols match kernel version 2.4.18.
Sep 21 23:35:41 localhost kernel: Loaded 256 symbols from 11 modules.
Sep 21 23:35:41 localhost kernel: SCSI disk error : host 0 channel 0 id 1 lun 0
return code = 27010000
Sep 21 23:35:41 localhost kernel: I/O error: dev 08:17, sector 66453508
Sep 21 23:35:41 localhost kernel: SCSI disk error : host 0 channel 0 id 1 lun 0
return code = 27010000
:
:
Sep 21 23:35:49 localhost kernel: scsi : aborting command due to timeout : pid
43891492, scsi0, channel 0, id 1, lun 0 Write (10) 00 00 4b ae 5b 00 00 02 00
Sep 21 23:35:49 localhost kernel: mptscsih: OldAbort scheduling ABORT SCSI IO
(sc=c2db7200)
Sep 21 23:35:49 localhost kernel: IOs outstanding = 5
Sep 21 23:35:49 localhost kernel: scsi : aborting command due to timeout : pid
43891493, scsi0, channel 0, id 1, lun 0 Write (10) 00 00 43 2e 5d 00 00 02 00
:
:
Sep 21 23:35:49 localhost kernel: mptscsih: ioc0: Issue of TaskMgmt Successful!
Sep 21 23:35:49 localhost kernel: SCSI host 0 abort (pid 43891492) timed out – resetting
Sep 21 23:35:49 localhost kernel: SCSI bus is being reset for host 0 channel 0.
Sep 21 23:35:50 localhost kernel: mptscsih: OldReset scheduling BUS_RESET (sc=c2db7200)
Sep 21 23:35:50 localhost kernel: IOs outstanding = 6
Sep 21 23:35:50 localhost kernel: SCSI host 0 abort (pid 43891493) timed out – resetting
:
:
Sep 21 23:35:51 localhost kernel: SCSI host 0 reset (pid 43891492) timed out again -
Sep 21 23:35:51 localhost kernel: probably an unrecoverable SCSI bus or device hang.
Sep 21 23:35:51 localhost kernel: SCSI host 0 reset (pid 43891493) timed out again -
Sep 21 23:35:51 localhost kernel: SCSI Error Report =-=-= (0:0:0)
Sep 21 23:35:51 localhost kernel: SCSI_Status=02h (CHECK CONDITION)
Sep 21 23:35:51 localhost kernel: Original_CDB[]: 28 00 02 B1 4E 62 00 00 04 00
Sep 21 23:35:51 localhost kernel: SenseData[12h]: 70 00 06 00 00 00 00 0A 00 00 00 00 29 02 02 00 00 00
Sep 21 23:35:51 localhost kernel: SenseKey=6h (UNIT ATTENTION); FRU=02h
Sep 21 23:35:51 localhost kernel: ASC/ASCQ=29h/02h “SCSI BUS RESET OCCURRED”
Sep 21 23:35:51 localhost kernel: SCSI Error Report =-=-= (0:1:0)
Sep 21 23:35:51 localhost kernel: SCSI_Status=02h (CHECK CONDITION)
Sep 21 23:35:51 localhost kernel: Original_CDB[]: 2A 00 00 45 EE 5F 00 00 02 00
Sep 21 23:35:51 localhost kernel: SenseData[12h]: 70 00 06 00 00 00 00 0A 00 00 00 00 29 02 02 00 00 00
Sep 21 23:35:51 localhost kernel: SenseKey=6h (UNIT ATTENTION); FRU=02h
Sep 21 23:35:51 localhost kernel: ASC/ASCQ=29h/02h “SCSI BUS RESET OCCURRED”
Sep 21 23:35:51 localhost kernel: md3: no spare disk to reconstruct array! — continuing in degraded mode
Sep 21 23:35:51 localhost kernel: md: updating md2 RAID superblock on device
Sep 21 23:35:52 localhost kernel: md: (skipping faulty sdb5 )
Sep 21 23:35:52 localhost kernel: md: sda5 [events: 00000012]<6>(write) sda5's sb offset: 4192832
Sep 21 23:35:52 localhost kernel: raid1: sda7: redirecting sector 30736424 to another mirror

In the following example, we see write errors on channel 0, id 0, lun 0 along with a medium error indicating an issue with the device itself. In this example, target 0 is the suspect part.

/var/log/messages

scsi0: ERROR on channel 0, id 0, lun 0, CDB: Write (10) 00 06 1d 3a 0d 00 00 08 00
Info fld=0x61d3a0d, Deferred sd08:02: sense key Medium Error
Additional sense indicates Write error
I/O error: dev 08:02, sector 102498376
SCSI Error: (0:0:0) Status=02h (CHECK CONDITION)
Key=3h (MEDIUM ERROR); FRU=0Ch
ASC/ASCQ=0Ch/02h “”
CDB: 2A 00 07 F9 3A 2D 00 00 08 00

scsi0: ERROR on channel 0, id 0, lun 0, CDB: Write (10) 00 07 f9 3a 2d 00 00 08 00
Info fld=0x8153a0d, Deferred sd08:02: sense key Medium Error
Additional sense indicates Write error – auto reallocation failed I/O error: dev 08:02, sector 133693544

Here is an example of the output you would see from cat /proc/scsi/scsi. This gives you the model number of each drive, which can help in determining a part number and disk ID. Output will vary depending on platform model and configuration:

cat /proc/scsi/scsi

Attached devices:

Host: scsi0 Channel: 00 Id: 00 Lun: 00
Vendor: SEAGATE Model: ST373307LC Rev: 0007
Type: Direct-Access ANSI SCSI revision: 03
Host: scsi0 Channel: 00 Id: 01 Lun: 00
Vendor: SEAGATE Model: ST373307LC Rev: 0007
Type: Direct-Access
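To map a suspect target id back to a physical drive model, the file can be parsed with awk (a sketch; the function name is mine and the field positions assume the layout shown above):

```shell
#!/bin/sh
# Print "id NN: VENDOR MODEL" for each device in /proc/scsi/scsi.
list_scsi_disks() {
    awk '/Host:/   { id = $6 }                   # remember the Id: field
         /Vendor:/ { print "id " id ": " $2, $4 }' "${1:-/proc/scsi/scsi}"
}
# Usage: list_scsi_disks          # or: list_scsi_disks saved-scsi.out
```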

Here’s an example of what you might see with dmesg. In the example shown, we can see that there is an issue with channel 0, id 1 and lun 0. This example needs more data to determine the failure; however, one may suspect that the disk needs to be replaced, as it shows an I/O error twice on the same disk.

Sector 3228343
scsi0 (1:0): rejecting I/O to offline device
RAID1 conf printout:
— wd:1 rd:2
disk 0, wo:0, o:1, dev:sda1
disk 1, wo:1, o:0, dev:sdb1
RAID1 conf printout:
— wd:1 rd:2
disk 0, wo:0, o:1, dev:sda1
scsi0 (1:0): rejecting I/O to offline device
md: write_disk_sb failed for device sdb2
md: errors occurred during superblock update, repeating
scsi0 (1:0): rejecting I/O to offline device

RHEL 5 : Crash Dump capturing for Red Hat Linux




There are numerous occasions when a crash dump can be a valuable source of information when troubleshooting a system. The most common times are a system hang or a system panic.

Under Solaris[TM] on both SPARC(R) and x86 platforms, the mechanisms for getting a crash dump in these situations are well understood. Under Linux (specifically Red Hat) this situation is less clear.

This post explains how to get a crash dump from Red Hat Linux to aid in troubleshooting system hangs after the operating system has loaded. It covers which versions of RHEL are required, and the differences between 32-bit and 64-bit support.

RHEL crash dump Utilities:

The two main options for getting a crash dump (all pages in memory dumped to a file) under RHEL are netdump and diskdump.


Netdump – supplied in RHEL 3 U1 and later. (If you are on Update 1, please see RHSA-2004:017-06 from Red Hat, which also enables 64-bit OS dumps.) Netdump dumps a vmcore file containing the entire contents of memory over the network to a dedicated netdump-server. It also dumps a thread list and register information over the network to a log file, along with any kernel oops information.

This allows for a central netdump-server that can receive dumps and logs from multiple systems and multiple architectures on a network. This machine can be provisioned with large amounts of disk space and allows for central maintenance.

Security between the client and server is catered for.

There is a bug regarding netdump working across subnets (https://bugzilla.redhat.com/bugzilla/show_bug.cgi?id=90803); currently the server and client need to be on the same subnet. A fix is scheduled for RHEL3 U5 and RHEL4 U1.

The netdump-server package is on CD 1 of RHEL3 and needs to be installed manually (via rpm).

The netdump client and its kernel modules are installed by default.







Diskdump – supplied in RHEL3 U3 and later. This is more familiar to Solaris users and is closer to the savecore facility in Solaris. A dedicated partition is formatted to receive disk dumps. When the system panics, it writes a memory image to this partition. When the system comes back up, the partition is checked, and if it contains a valid dump image, that image is written back out to /var/crash (or another location) on the system. After this has completed, the dump partition is reformatted (which can take a while), ready to take another crash dump.

Diskdump is supplied with RHEL3 U3 or later on both 32bit and 64bit

Both of the above methods supply a vmcore image and a textual stack dump. They do not provide a namelist or symbol table. To analyse the resulting dump image, a kernel needs to be built with debug flags set, matched to the kernel the customer is running. As most RHEL installs use the default kernel, this isn't as tricky as might be expected.

The contents of /boot on the customer system should be tarred up, as they can contain useful system maps for assistance in analysing a Red Hat Linux crash dump.

The crash analysis tool provided with Red Hat Linux, ‘crash’, contains information in its manual page about what it requires. It can also be run against a live kernel image.

Forcing Crash Dumps from Hung Linux systems

Setting up RHEL for crash dumps

The main method for forcing a crash dump from a hung Linux system is the alt-sysrq-<key> combination. This is analogous to STOP-A (or L1-A) on a Sun SPARC system.

Alternatively, echo 'h' > /proc/sysrq-trigger will have the same effect as pressing alt-sysrq-h.

Enabling alt-sysrq key sequence

The alt-sysrq key sequence is disabled by default under RHEL. To enable it, edit /etc/sysctl.conf and set kernel.sysrq = 1
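For example (a sketch; the function takes the config path as an argument purely so it is easy to try against a scratch copy):

```shell
#!/bin/sh
# Enable the alt-sysrq key sequence persistently by fixing up
# (or appending) the kernel.sysrq line in sysctl.conf.
enable_sysrq() {
    conf=${1:-/etc/sysctl.conf}
    if grep -q '^kernel\.sysrq' "$conf" 2>/dev/null; then
        sed -i 's/^kernel\.sysrq.*/kernel.sysrq = 1/' "$conf"
    else
        echo 'kernel.sysrq = 1' >> "$conf"
    fi
}
# Usage (as root): enable_sysrq && sysctl -p
# For the running kernel only: echo 1 > /proc/sys/kernel/sysrq
```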

Netdump configuration

http://www.redhat.com/support/wpapers/redhat/netdump/

Netdump only works on i386, not x86_64: the netconsole.o kernel module is not supplied for x86_64. Even if you roll your own kernel, it will load, but it will not dump the memory image over the network in 64-bit mode.

The server and client need to be on the same subnet.

On the server
chkconfig netdump-server on
service netdump-server start
create user netdump with password

On the client
Edit the file /etc/sysconfig/netdump and add a line like NETDUMPADDR=10.0.0.1
make sure the DEV= line reflects the ethernet adaptor that the server is accessible on (e.g. DEV=eth1)
chkconfig netdump on
service netdump propagate (will require netdump user/password on server)
service netdump start (make sure the module loads ok)
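Putting the client-side settings together, /etc/sysconfig/netdump would contain something like this (the address and interface below are placeholders for your own network):

```shell
# Example /etc/sysconfig/netdump (client side)
NETDUMPADDR=10.0.0.1   # IP address of the netdump-server
DEV=eth1               # interface from which the server is reachable
```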

If the server changes IP address or MAC address, all netdump client modules will need to be unloaded and reloaded.




Netdump example output:

CPU#0 is frozen.
CPU#1 is executing netdump.
CPU#2 is frozen.
CPU#3 is frozen.
< netdump activated - performing handshake with the client. >
NETDUMP START!
< handshake completed - listening for dump requests. >
0(79500)/


Diskdump configuration

Disk dump requires RHEL3 Update 3 or later. It works under 32bit and 64bit modes.

Create a new partition using fdisk.
NOTE: It MUST be bigger than the amount of physical memory in the system.

Swap partitions cannot be used for dump devices.

Format the newly created dump partition (a reboot may be required to reread the partition tables on the disk) with diskdumpfmt -f <device> (e.g. diskdumpfmt -f /dev/sdb2)

chkconfig diskdump on
service diskdump start

Diskdump example output:

CPU frozen: #0#1
CPU#1 is executing diskdump.
start dumping
dumping memory...


And on the way back up:

INIT: Entering runlevel: 3
Entering non-interactive startup
Saving panic dump: [ OK ]
Formatting dump device: [ OK ]
Starting diskdump: [ OK ]


RHEL : Examining Red Hat Linux kernel state using Sysrq key combinations




The internal state of a Unix-based kernel can provide valuable information on current system state. If a user process, or the kernel, is hanging, then the more information that can be gathered at that point, the greater the chance of a good diagnosis.

Under Solaris on the SPARC platform there are well-known mechanisms for gathering stack traces, processor states and memory states. Under Linux, this can appear to be more of a black art.

This document sets out to document the information that can be captured, hopefully as early as possible, to improve the chances of a good diagnosis.




Comparisons with Sun SPARC systems.

For a Sun system, the Stop-A key sequence (or sending a break from a serial console) will drop the system to the ok prompt. From this point, crash dumps can be forced, or register/CPU states can be examined.

Under Linux, this ability is integrated in the kernel, and triggered using alt-sysrq key sequences.

Enabling Sysrq.

The sysrq feature needs to be enabled before it can be used. It is disabled by default on RHEL 3 and 4.

To enable the feature, edit /etc/sysctl.conf and set the value below to 1:

# Controls the System Request debugging functionality of the kernel
kernel.sysrq = 1


Forcing sysrq

On X4200/X4100 servers, once connected to the SP console (start /SP/console from the ILOM prompt), press Esc followed by Shift+B to send a break, and then press the key corresponding to the sysrq command to send.

On V65x Servers, send a break to the console, and then press the key corresponding to the sysrq-command to send.

On V20z and V40z, once connected via the platform console, press ^Ecl0<letter> to send the sysrq-command.

On Blades (B100x, B200x) send a break from the SC console, then press the letter corresponding to the sysrq-command in the serial console session to the blade.

This letter keystroke needs to be performed within 5 seconds of the break being sent. A ? character will print the menu of available options.

List of current (Linux-2.4.21) valid key presses

SysRq : HELP : loglevel0-8 reBoot Crash tErm kIll saK showMem Off showPc unRaw Sync showTasks Unmount shoWcpus

Note: Although the menu above displays the selection key for each command in upper case, the keys should be entered in lower case; the ‘sysrq’ handler does not accept upper-case characters and will simply display the menu again if an upper-case character is sent.

The correct keypress is in the square brackets

reBoot – [B] – This will reboot the system

Crash – [C] – This will force-panic the system, by dereferencing a pointer and then reading from that address.

If diskdump or netdump are configured (see Technical Instruction 210668) then a crash dump can be forced.


va64-v20zc-gmp02 login: [halt sent]
SysRq : Crashing the kernel by request
Unable to handle kernel NULL pointer dereference at virtual address 0000000000000000
printing rip: ffffffff801f66b0
PML4 8a1c7067 PGD 89f8e067 PMD 0
Oops: 0002
CPU 0
Pid: 0, comm: swapper Not tainted
RIP: 0010:[<ffffffff801f66b0>]{sysrq_handle_crash+0}
RSP: 0018:ffffffff805e6280 EFLAGS: 00010292
RAX: 000000000000001f RBX: ffffffff80445cd0 RCX: 0000000000000000
RDX: 0000000000000000 RSI: ffffffff80619f18 RDI: 0000000000000063
RBP: 0000000000000000 R08: 000000000000000d R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000001 R12: 0000000000000063
R13: 0000000000000000 R14: ffffffff80619f18 R15: 0000000000000006
FS: 0000002a969654c0(0000) GS:ffffffff805e1440(0000) knlGS:0000000000000000
CS: 0010 DS: 0018 ES: 0018
CR0: 000000008005003b CR2: 0000000000000000 CR3: 0000000000101000 CR4: 00000000000006e0
Call Trace: [<ffffffff801f6d12>]{__handle_sysrq_nolock+146} [<ffffffff801f6c48>]{handle_sysrq+72} [<ffffffff801eedd5>]{receive_chars+485} [<ffffffff801ef2b6>]{rs_interrupt_single+150} [<ffffffff8011317f>]{handle_IRQ_event+95} [<ffffffff80113422>]{do_IRQ+274} [<ffffffff8010de20>]{default_idle+0} [<ffffffff8010de20>]{default_idle+0} [<ffffffff80110807>]{common_interrupt+95} <EOI> [<ffffffff8011fb45>]{thread_return+0} [<ffffffff8010de3e>]{default_idle+30} [<ffffffff8010de20>]{default_idle+0} [<ffffffff8010dec9>]{cpu_idle+73}
<SNIP>
CPU frozen: #0#1 CPU#0 is executing diskdump. start dumping


tErm – [E] – Send Term (sig 15) to all processes except init

kIll – [I] – Send Kill (sig 9) to all processes except init

saK – [K] – Kill all processes on currently active virtual console. Should give a login prompt, that is secure (e.g. not a user process trying to look like a login prompt).

showMem – [M] – This will dump the following information; the system will continue running.


SysRq : Show Memory
Mem-info:
Zone:DMA freepages: 0 min: 0 low: 0 high: 0
Zone:Normal freepages:358380 min: 1246 low: 8923 high: 12889
Zone:HighMem freepages: 0 min: 0 low: 0 high: 0
Zone:DMA freepages: 2529 min: 0 low: 0 high: 0
Zone:Normal freepages:382475 min: 1278 low: 9149 high: 13212
Zone:HighMem freepages: 0 min: 0 low: 0 high: 0
Free pages: 743384 ( 0 HighMem)
( Active: 28480/8679, inactive_laundry: 2665, inactive_clean: 0, free: 743384 )
aa:0 ac:0 id:0 il:0 ic:0 fr:0
aa:676 ac:12917 id:7391 il:2262 ic:0 fr:358381
aa:0 ac:0 id:0 il:0 ic:0 fr:0
aa:0 ac:0 id:0 il:0 ic:0 fr:2529
aa:1446 ac:13441 id:1288 il:403 ic:0 fr:382475
aa:0 ac:0 id:0 il:0 ic:0 fr:0
17981*4kB 51522*8kB 28603*16kB 10636*32kB 2040*64kB 123*128kB 2*256kB 1*512kB 0*1024kB 0*2048kB 1*4096kB = 1433524kB)
Swap cache: add 0, delete 0, find 0/0, race 0+0
210925 pages of slabcache
82 pages of kernel stacks
123 lowmem pagetables, 115 highmem pagetables
Free swap: 2040244kB
1032047 pages of RAM
746589 free pages
33834 reserved pages
27394 pages shared
0 pages swap cached
Buffer memory: 74448kB
Cache memory: 76640kB
CLEAN: 3301 buffers, 13183 kbyte, 67 used (last=3301), 0 locked, 0 dirty 0 delay
Red Hat Enterprise Linux AS release 3 (Taroon Update 4) Kernel 2.4.21-27.ELsmp on an x86_64


Off – [O] – Turn the system off (if supported by hardware)

showPc – [P] (example from i386 Xeon) – shows register state (program counter):

SysRq : Show Regs
Pid/TGid: 0/0, comm: swapper
EIP: 0060:[<c0109129>] CPU: 3 EIP is at default_idle [kernel] 0x29 (2.4.21-27.ELsmp)
ESP: 080b:c01091c2 EFLAGS: 00000246 Not tainted
EAX: 00000000 EBX: c0109100 ECX: c043c680 EDX: c4956000
ESI: c4956000 EDI: c4956000 EBP: c0109100
DS: 0068 ES: 0068 FS: 0000 GS: 0000
CR0: 8005003b CR2: b75f7000 CR3: 062e1f40 CR4: 000006f0
Call Trace: [<c01091c2>] cpu_idle [kernel] 0x42 (0xc4957fb0) [<c01295e3>] printk [kernel] 0x153 (0xc4957fcc)


showTasks – [T] – shows all running tasks, with stack traces

SysRq : Show State

free sibling task PC stack pid father child younger older
init S 00000002 2604 1 0 6 2 (NOTLB)
Call Trace: [<c0123f14>] schedule [kernel] 0x2f4 (0xc61f1ea0) [<c0134f65>] schedule_timeout [kernel] 0x65 (0xc61f1ee4) [<c015910c>] __get_free_pages [kernel] 0x1c (0xc61f1eec) [<c0179071>] __pollwait [kernel] 0x31 (0xc61f1ef0) [<c0134ef0>] process_timeout [kernel] 0x0 (0xc61f1f04) [<c017933b>] do_select [kernel] 0x13b (0xc61f1f1c) [<c01797de>] sys_select [kernel] 0x34e (0xc61f1f60)
migration/0 S 00000000 5500 2 0 3 1 (L-TLB)
Call Trace: [<c0123f14>] schedule [kernel] 0x2f4 (0xc4955f68) [<c01258f0>] migration_task [kernel] 0x0 (0xc4955f9c) [<c0125bfb>] migration_task [kernel] 0x30b (0xc4955fac) [<c01258f0>] migration_task [kernel] 0x0 (0xc4955fc4) [<c01258f0>] migration_task [kernel] 0x0 (0xc4955fe0) [<c01095ad>] kernel_thread_helper [kernel] 0x5 (0xc4955ff0)


<SNIP>

This contains the full stack for every process on the system, and lists what each CPU is running.

unRaw – [R] – Forces raw terminal mode

Sync – [S] – syncs all mounted file systems, flushes all pending writes

Unmount – [U] – Syncs, unmounts and then remounts all filesystems as read only.

shoWcpus – [W] (example from a dual-processor, HT-enabled Xeon):

SysRq : Show CPUs
CPU2: c63f5e74 00000002 c01cea1f 00000000 c03b2d34 00000077 00000006 c01cecaa 00000077 c63f5f7c 00000000 00000000 00000000 00000000 c63f5f7c c01cec0d 00000077 c63f5f7c 00000000 00000000 f66d6000 c03ad438 c63f5f1c f7ee1d80 Call Trace: [<c01cea1f>] sysrq_handle_showcpus [kernel] 0xf (0xc63f5e7c) [<c01cecaa>] __handle_sysrq_nolock [kernel] 0x7a (0xc63f5e90) [<c01cec0d>] handle_sysrq [kernel] 0x5d (0xc63f5eb0) [<c01c5f06>] receive_chars [kernel] 0x1d6 (0xc63f5ed4) [<c0134933>] update_process_time_intertick [kernel] 0x53 (0xc63f5ef0) [<c01c64ca>] rs_interrupt_single [kernel] 0x12a (0xc63f5f04) [<c010dd39>] handle_IRQ_event [kernel] 0x69 (0xc63f5f30) [<c010df79>] do_IRQ [kernel] 0xb9 (0xc63f5f50) [<c010dec0>] do_IRQ [kernel] 0x0 (0xc63f5f74) [<c0109100>] default_idle [kernel] 0x0 (0xc63f5f7c) [<c0109100>] default_idle [kernel] 0x0 (0xc63f5f90) [<c0109129>] default_idle [kernel] 0x29 (0xc63f5fa4) [<c01091c2>] cpu_idle [kernel] 0x42 (0xc63f5fb0) [<c01295e3>] printk [kernel] 0x153 (0xc63f5fcc)
CPU3: c4957f64 00000003 c011c91f 00000000 00001f7c c03f2caa c0109100 00000000 c4956000 c4956000 c4956000 c0109100 00000000 00000068 00000068 fffffffb c0109129 00000060 00000246 c01091c2 0702080b 00000000 00000000 00000000 Call Trace: [<c011c91f>] smp_call_function_interrupt [kernel] 0x2f (0xc4957f6c) [<c0109100>] default_idle [kernel] 0x0 (0xc4957f7c) [<c0109100>] default_idle [kernel] 0x0 (0xc4957f90) [<c0109129>] default_idle [kernel] 0x29 (0xc4957fa4) [<c01091c2>] cpu_idle [kernel] 0x42 (0xc4957fb0) [<c01295e3>] printk [kernel] 0x153 (0xc4957fcc)
CPU0: c03f1f88 00000000 c011c91f 00000000 00001fa0 c03f2caa c0109100 c043b280 c03f0000 c03f0000 c03f0000 c0109100 00000000 00000068 00000068 fffffffb c0109129 00000060 00000246 c01091c2 0002080b 00099800 c0107000 0008e000 Call Trace: [<c011c91f>] smp_call_function_interrupt [kernel] 0x2f (0xc03f1f90) [<c0109100>] default_idle [kernel] 0x0 (0xc03f1fa0) [<c0109100>] default_idle [kernel] 0x0 (0xc03f1fb4) [<c0109129>] default_idle [kernel] 0x29 (0xc03f1fc8) [<c01091c2>] cpu_idle [kernel] 0x42 (0xc03f1fd4) [<c0107000>] stext [kernel] 0x0 (0xc03f1fe0)
CPU1: c63f7f64 00000001 c011c91f 00000000 00001f7c c03f2caa c0109100 c043b280 c63f6000 c63f6000 c63f6000 c0109100 00000000 00000068 00000068 fffffffb c0109129 00000060 00000246 c01091c2 0102080b 00000000 00000000 00000000 Call Trace: [<c011c91f>] smp_call_function_interrupt [kernel] 0x2f (0xc63f7f6c) [<c0109100>] default_idle [kernel] 0x0 (0xc63f7f7c) [<c0109100>] default_idle [kernel] 0x0 (0xc63f7f90) [<c0109129>] default_idle [kernel] 0x29 (0xc63f7fa4) [<c01091c2>] cpu_idle [kernel] 0x42 (0xc63f7fb0) [<c01292b3>] call_console_drivers [kernel] 0x63 (0xc63f7fc4) [<c01295e3>] printk [kernel] 0x153 (0xc63f7ffc)


Investigate a SCSI disk error from Linux


Investigate a SCSI disk error from Linux



Collect the following information when a platform reports SCSI disk errors:
The following files:
/var/log/messages*



The output from the following commands:
/sbin/fdisk -l
/bin/cat /proc/scsi/scsi
/bin/dmesg






Once you have collected the data, review the output for any similarities in the errors. If the error always reports the same target, then consider replacing the device itself.
If the errors change to different targets on the same bus, then further troubleshooting is required. In the following references, the id is the target number of the device.

Here are examples of the output you will see on Red Hat Linux when SCSI disks report errors.

Notice in this example that the channel is 0, the id is 1 and the lun is 0. Every error line refers to the same id, and the disk is reporting errors on different sectors. Here, id 1 is target 1, which indicates that target 1 is producing the errors and is the suspect FRU (field-replaceable unit).

/var/log/messages

Dec 20 10:33:23 localhost kernel: SCSI disk error : host 0 channel 0 id 1 lun 0 return code = 27010000
Dec 20 10:33:23 localhost kernel: I/O error: dev 08:03, sector 0
Dec 20 10:33:23 localhost kernel: SCSI disk error : host 0 channel 0 id 1 lun 0 return code = 27010000
Dec 20 10:33:23 localhost kernel: I/O error: dev 08:03, sector 13631520
Dec 20 10:33:23 localhost kernel: EXT3-fs error (device sd(8,3)): ext3_get_inode_loc: unable to read inode block - inode=852064, block=1703940
Dec 20 10:33:23 localhost kernel: SCSI disk error : host 0 channel 0 id 1 lun 0 return code = 27010000
Dec 20 10:33:23 localhost kernel: I/O error: dev 08:03, sector 0
Dec 20 10:33:23 localhost kernel: EXT3-fs error (device sd(8,3)) in ext3_reserve_inode_write: IO failure
Dec 20 10:33:24 localhost kernel: SCSI disk error : host 0 channel 0 id 1 lun 0 return code = 27010000
Dec 20 10:33:24 localhost kernel: I/O error: dev 08:03, sector 0
Dec 20 10:33:24 localhost kernel: SCSI disk error : host 0 channel 0 id 1 lun 0 return code = 27010000
Dec 20 10:33:24 localhost kernel: I/O error: dev 08:03, sector 13631520
Dec 20 10:33:24 localhost kernel: EXT3-fs error (device sd(8,3)): ext3_get_inode_loc: unable to read inode block - inode=852064, block=1703940
Dec 20 10:33:24 localhost kernel: SCSI disk error : host 0 channel 0 id 1 lun 0 return code = 27010000
Dec 20 10:33:24 localhost kernel: I/O error: dev 08:03, sector 0
Dec 20 10:33:24 localhost kernel: EXT3-fs error (device sd(8,3)) in ext3_reserve_inode_write: IO failure
Dec 20 10:33:24 localhost kernel: SCSI disk error : host 0 channel 0 id 1 lun 0 return code = 27010000
Dec 20 10:33:24 localhost kernel: I/O error: dev 08:03, sector 0

In the following example, the errors again report the same channel 0, id 1, and lun 0 each time. We can also see SCSI bus timeouts along with SCSI check-condition messages for the same device. In this case, target 1 is the suspect FRU.

Another example of /var/log/messages

Sep 21 23:35:41 localhost kernel: klogd 1.4.1, log source = /proc/kmsg started.
Sep 21 23:35:41 localhost kernel: Inspecting /boot/System.map-2.4.18-17.7.x.4smp
Sep 21 23:35:41 localhost kernel: Loaded 17857 symbols from /boot/System.map-2.4.18-17.7.x.4smp.
Sep 21 23:35:41 localhost kernel: Symbols match kernel version 2.4.18.
Sep 21 23:35:41 localhost kernel: Loaded 256 symbols from 11 modules.
Sep 21 23:35:41 localhost kernel: SCSI disk error : host 0 channel 0 id 1 lun 0
return code = 27010000
Sep 21 23:35:41 localhost kernel: I/O error: dev 08:17, sector 66453508
Sep 21 23:35:41 localhost kernel: SCSI disk error : host 0 channel 0 id 1 lun 0
return code = 27010000
:
:
Sep 21 23:35:49 localhost kernel: scsi : aborting command due to timeout : pid
43891492, scsi0, channel 0, id 1, lun 0 Write (10) 00 00 4b ae 5b 00 00 02 00
Sep 21 23:35:49 localhost kernel: mptscsih: OldAbort scheduling ABORT SCSI IO
(sc=c2db7200)
Sep 21 23:35:49 localhost kernel: IOs outstanding = 5
Sep 21 23:35:49 localhost kernel: scsi : aborting command due to timeout : pid
43891493, scsi0, channel 0, id 1, lun 0 Write (10) 00 00 43 2e 5d 00 00 02 00
:
:
Sep 21 23:35:49 localhost kernel: mptscsih: ioc0: Issue of TaskMgmt Successful!
Sep 21 23:35:49 localhost kernel: SCSI host 0 abort (pid 43891492) timed out - resetting
Sep 21 23:35:49 localhost kernel: SCSI bus is being reset for host 0 channel 0.
Sep 21 23:35:50 localhost kernel: mptscsih: OldReset scheduling BUS_RESET (sc=c2db7200)
Sep 21 23:35:50 localhost kernel: IOs outstanding = 6
Sep 21 23:35:50 localhost kernel: SCSI host 0 abort (pid 43891493) timed out - resetting
:
:
Sep 21 23:35:51 localhost kernel: SCSI host 0 reset (pid 43891492) timed out again -
Sep 21 23:35:51 localhost kernel: probably an unrecoverable SCSI bus or device hang.
Sep 21 23:35:51 localhost kernel: SCSI host 0 reset (pid 43891493) timed out again -
Sep 21 23:35:51 localhost kernel: SCSI Error Report =-=-= (0:0:0)
Sep 21 23:35:51 localhost kernel: SCSI_Status=02h (CHECK CONDITION)
Sep 21 23:35:51 localhost kernel: Original_CDB[]: 28 00 02 B1 4E 62 00 00 04 00
Sep 21 23:35:51 localhost kernel: SenseData[12h]: 70 00 06 00 00 00 00 0A 00 00 00 00 29 02 02 00 00 00
Sep 21 23:35:51 localhost kernel: SenseKey=6h (UNIT ATTENTION); FRU=02h
Sep 21 23:35:51 localhost kernel: ASC/ASCQ=29h/02h "SCSI BUS RESET OCCURRED"
Sep 21 23:35:51 localhost kernel: SCSI Error Report =-=-= (0:1:0)
Sep 21 23:35:51 localhost kernel: SCSI_Status=02h (CHECK CONDITION)
Sep 21 23:35:51 localhost kernel: Original_CDB[]: 2A 00 00 45 EE 5F 00 00 02 00
Sep 21 23:35:51 localhost kernel: SenseData[12h]: 70 00 06 00 00 00 00 0A 00 00 00 00 29 02 02 00 00 00
Sep 21 23:35:51 localhost kernel: SenseKey=6h (UNIT ATTENTION); FRU=02h
Sep 21 23:35:51 localhost kernel: ASC/ASCQ=29h/02h "SCSI BUS RESET OCCURRED"
Sep 21 23:35:51 localhost kernel: md3: no spare disk to reconstruct array! -- continuing in degraded mode
Sep 21 23:35:51 localhost kernel: md: updating md2 RAID superblock on device
Sep 21 23:35:52 localhost kernel: md: (skipping faulty sdb5 )
Sep 21 23:35:52 localhost kernel: md: sda5 [events: 00000012]<6>(write) sda5's sb offset: 4192832
Sep 21 23:35:52 localhost kernel: raid1: sda7: redirecting sector 30736424 to another mirror

In the following example, we see write errors on channel 0, id 0, lun 0 along with a medium error indicating an issue with the device itself. In this example, target 0 is the suspect part.

/var/log/messages

scsi0: ERROR on channel 0, id 0, lun 0, CDB: Write (10) 00 06 1d 3a 0d 00 00 08 00
Info fld=0x61d3a0d, Deferred sd08:02: sense key Medium Error
Additional sense indicates Write error
I/O error: dev 08:02, sector 102498376
SCSI Error: (0:0:0) Status=02h (CHECK CONDITION)
Key=3h (MEDIUM ERROR); FRU=0Ch
ASC/ASCQ=0Ch/02h “”
CDB: 2A 00 07 F9 3A 2D 00 00 08 00

scsi0: ERROR on channel 0, id 0, lun 0, CDB: Write (10) 00 07 f9 3a 2d 00 00 08 00
Info fld=0x8153a0d, Deferred sd08:02: sense key Medium Error
Additional sense indicates Write error – auto reallocation failed I/O error: dev 08:02, sector 133693544

Here is an example of the output you would see from cat /proc/scsi/scsi. This gives you the model number of the drive, which can help in determining a part number and disk ID. The output will vary depending on platform model and configuration:

# cat /proc/scsi/scsi
Attached devices:

Host: scsi0 Channel: 00 Id: 00 Lun: 00
Vendor: SEAGATE Model: ST373307LC Rev: 0007
Type: Direct-Access ANSI SCSI revision: 03
Host: scsi0 Channel: 00 Id: 01 Lun: 00
Vendor: SEAGATE Model: ST373307LC Rev: 0007
Type: Direct-Access

Here's an example of what you might see from dmesg. In this case we can see an issue with channel 0, id 1 and lun 0 (shown as "scsi0 (1:0)"). More data is needed to pinpoint the failure, but one may suspect that the disk needs to be replaced, as the same disk is rejected for I/O twice.

Sector 3228343
scsi0 (1:0): rejecting I/O to offline device
RAID1 conf printout:
--- wd:1 rd:2
disk 0, wo:0, o:1, dev:sda1
disk 1, wo:1, o:0, dev:sdb1
RAID1 conf printout:
--- wd:1 rd:2
disk 0, wo:0, o:1, dev:sda1
scsi0 (1:0): rejecting I/O to offline device
md: write_disk_sb failed for device sdb2
md: errors occurred during superblock update, repeating
scsi0 (1:0): rejecting I/O to offline device

RHEL 5 Linux : configure Kdump on Red Hat Enterprise Linux 5



Installing required packages

RHEL 5 has the Kdump packages installed by default. If for any reason they are not installed, you need to install the packages “kexec-tools-<version>.rpm” and “system-config-kdump-<version>.rpm” with the following commands:

# rpm -ivh kexec-tools-<version>.rpm system-config-kdump-<version>.rpm


Or, if your system is registered with the Red Hat Network, by running:

# yum install kexec-tools system-config-kdump

Configuration of Kdump

First you need to enable Kdump. A configuration dialog is available which can be started under a graphical environment with:

# system-config-kdump


Check the option box "Enable kdump" at the top of the dialog.

Next, you have to define the memory to reserve for Kdump. In the dialog you see the memory information for your system and the usable memory for Kdump. On most systems, 128MB of Kdump memory should be enough.

Finally, you need to define a location where to store the dump file. You have the choice between "file", "nfs", "ssh", "raw", "ext2", and "ext3". This setup is straightforward; configure kdump as fits best into your environment. The simplest configuration for the location is "file:///var/crash".

Make sure that you have enough disk space at the configured location: at least as much as the physical memory of the system that is expected to be dumped.

After you have configured kdump, you need to reboot the system to activate the settings.

More information about the configuration can be found in the file "/usr/share/doc/kexec-tools-*/kexec-kdump-howto.txt".
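For reference, the same setup can be done without the GUI by editing two files directly. The kernel version, reservation size and dump target below are illustrative assumptions, not required values:

```shell
# /boot/grub/grub.conf -- reserve crash-kernel memory on the kernel line:
#   kernel /vmlinuz-2.6.18-8.el5 ro root=/dev/VolGroup00/LogVol00 crashkernel=128M@16M
#
# /etc/kdump.conf -- where to write the vmcore (a local ext3 filesystem here):
#   ext3 /dev/VolGroup00/LogVol00
#   path /var/crash
#
# Enable the service so it starts on boot (a reboot is needed first,
# so that the crashkernel= reservation takes effect):
#   chkconfig kdump on
#   service kdump start
```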

Checking the configuration

To make sure that the configuration is working, you can test by using the magic SysRq feature of the kernel.

WARNING: Please make sure that no other users are logged into the system and that all work is saved before following the next steps, otherwise this may lead to data loss.

First you need to enable it with the following command:

# echo 1 > /proc/sys/kernel/sysrq


Next you should sync the data of your hard disks to minimize the risk of data loss:

# echo s > /proc/sysrq-trigger


And finally you can force the system to "crash" with:

# echo c > /proc/sysrq-trigger


You should see some panic output and the system will restart into the kdump kernel to save the crash dump data. This will take some time, depending on the amount of memory in your system and the speed of the device the dump is written to. After the dump is finished, the system will reboot back into normal service.

If you followed the example above, you should now find the core file at "/var/crash/<YYYY-MM-DD-HH:MM>/vmcore", indicating that the setup is working.

Linux Troubleshooting – Root Password Reset




The general problem that we see in an enterprise environment without a centralized, automated password management tool is missing root passwords for servers.

Missing root passwords are also common when servers are initially managed by one team and later handed over to another, but not all root password changes are passed on to the new team.




The procedure below to reset the root password can be used on a Linux machine if you have access to the server console:

1. Reboot the machine

2. When the GRUB loader appears showing the Linux operating system to be booted, just press "e"

3. Highlight the kernel line using the arrow keys and then hit “e” again

4. This takes you to a command interface where you can edit the line. Go to the end of the line and add "init=/bin/bash" (no need to enter the double quotes)

5. And then hit the button “b” to boot from that kernel entry

This will drop you to a bash prompt much earlier than single user mode, before much has been initialized or mounted. At this stage the root filesystem is mounted read-only.

To make any modifications to the password files, we must remount the "/" filesystem in read-write mode.

Just use the command:

# mount -o remount,rw /

Take backup copies of /etc/passwd and /etc/shadow before modifying them, and then modify the "root" entry in /etc/shadow as below:

# original line
root:$1$EYBTVZHP$QtjkCG768giXzPvW4HqB5/:12832:0:99999:7:::
# after editing

root::12832:0:99999:7:::   --> the encrypted password field has been removed from the root entry, making the password empty.
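The same edit can be scripted with sed. This is a sketch run against a scratch copy (shadow.sample, using the hash from the example above); on a real system you would apply it to /etc/shadow only after taking a backup:

```shell
# Stand-in for /etc/shadow, containing the example root entry.
printf 'root:$1$EYBTVZHP$QtjkCG768giXzPvW4HqB5/:12832:0:99999:7:::\n' > shadow.sample

# Blank the password field (the second colon-separated field) of the root entry only.
sed 's/^root:[^:]*:/root::/' shadow.sample
# → root::12832:0:99999:7:::
```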

Now reboot the machine into normal mode, and once you log in with the empty password, don't forget to set a new root password. Otherwise, you know what happens: your server will turn into a "public toilet".

Redhat Linux : Setting Kernel Parameter




A common way to modify kernel parameters is via the /proc file system:
1. Log in as root user.
2. Change to the /proc/sys/kernel directory.
3. echo <desired list of values> > <group of parameters>

But this update is not permanent: after a system reboot, your kernel parameters' values will be the same as before. One way to make kernel parameter modifications permanent on Linux is to include them in a shell script, run as the root user or automatically during the startup process.
- Create file /etc/init.d/set_kernel_parameters


#!/bin/sh
#
#
echo "Start setting kernel parameters"
echo 250 32000 100 128 > /proc/sys/kernel/sem #This sets SEMMSL, SEMMNS, SEMOPM, SEMMNI
echo 2097152 > /proc/sys/kernel/shmall
echo 2147483648 > /proc/sys/kernel/shmmax
echo 4096 > /proc/sys/kernel/shmmni
#
echo 65536 > /proc/sys/fs/file-max
#
echo 1024 65000 > /proc/sys/net/ipv4/ip_local_port_range
#
echo 4194304 > /proc/sys/net/core/rmem_default
echo 4194304 > /proc/sys/net/core/rmem_max
echo 262144 > /proc/sys/net/core/wmem_default
echo 262144 > /proc/sys/net/core/wmem_max
#
ulimit -n 65536 >/dev/null 2>&1
ulimit -u 16384 >/dev/null 2>&1
#
echo "End setting kernel parameters"

- grant execute rights on this file
$ chmod 755 /etc/init.d/set_kernel_parameters

- create symbolic link to run at startup
$ ln -s /etc/init.d/set_kernel_parameters /etc/rc.d/rc5.d/S55kernel
$ ln -s /etc/init.d/set_kernel_parameters /etc/rc.d/rc3.d/S55kernel


- make the kernel parameters active by running as root
$ /etc/init.d/set_kernel_parameters









A second procedure is to make the same changes permanent via the /etc/sysctl.conf file.



Every time the system boots, the /etc/rc.d/rc.sysinit script is executed by the init process. This shell script contains a call to the sysctl command and reads the values from the /etc/sysctl.conf file as the ones to be set. Therefore, any values added to /etc/sysctl.conf take effect after a system boot, or without downtime using the "sysctl -p" command.



sysctl.conf is a simple file containing sysctl values to be read in and set by sysctl (see man 8 sysctl).

The syntax is simply as follows:
# comment
; comment



token = value

Note that blank lines are ignored, and whitespace before and after a token or value is ignored, although a value can contain whitespace within it. Lines which begin with a # or ; are considered comments and ignored.

Example:
# sysctl.conf sample
#
kernel.sysrq = 1
kernel.sem = 250 32000 100 128 #This sets SEMMSL, SEMMNS, SEMOPM, SEMMNI
kernel.shmmax = 2147483648
kernel.shmall = 2097152
kernel.shmmni = 4096
;
fs.file-max = 65536
;
net.ipv4.ip_forward = 0
net.ipv4.conf.default.rp_filter = 1
net.ipv4.ip_local_port_range = 1024 65000
;
net.core.rmem_default = 4194304
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 262144
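The parsing rules above (comments, blank lines, surrounding whitespace) can be sanity-checked with a short awk filter. This is only a sketch of the rules, not what sysctl itself runs:

```shell
# Sample sysctl.conf with comments, a blank line and stray leading whitespace.
cat > sysctl.sample <<'EOF'
# a comment
; another comment

   kernel.shmmax = 2147483648
kernel.sem = 250 32000 100 128
EOF

# Drop comment and blank lines, trim surrounding whitespace, keep "token = value".
awk '!/^[[:space:]]*[#;]/ && NF { gsub(/^[[:space:]]+|[[:space:]]+$/, ""); print }' sysctl.sample
# → kernel.shmmax = 2147483648
# → kernel.sem = 250 32000 100 128
```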


The sysctl command is used to view, set, and automate kernel settings in the /proc/sys/ directory. To get a quick overview of all settings configurable in the /proc/sys/ directory, run the "sysctl -a" command as root.

Linux : Quick Reference for Sendmail Issues





Here we discuss some of the common sendmail issues and troubleshooting procedures on Linux.



Problem: Sendmail cannot send mail to users in other domains.



Symptom: It can send mail to internal email accounts but cannot send mail outside of the company or to users in other domains.

Solution: To implement the solution, please execute the following steps:

1. Edit /etc/mail/sendmail.mc to have:

define(`SMART_HOST',`<your full smtp server address>')

and

DAEMON_OPTIONS(`Port=smtp,Addr=127.0.0.1, Name=MTA')dnl

Do NOT edit /etc/mail/sendmail.cf as it may cause unexpected results.

(The "DAEMON_OPTIONS" line is a security measure: it allows sendmail to accept e-mail only from the local server. Unless you need otherwise, this is good security practice.)

2. Regenerate sendmail.cf from sendmail.mc:

# m4 /etc/mail/sendmail.mc > /etc/mail/sendmail.cf

3. Restart the sendmail service:

# service sendmail restart

Linux : NTP Error Message "kernel time sync error 0001"




This error message is logged by the NTP daemon, and means that ntpd failed to adjust the OS internal clock for some reason.

The NTP daemon adjusts the OS internal clock by invoking a system call named adjtimex(). The Linux kernel expects this system call to be invoked regularly, as the NTP daemon normally does. When the interval becomes longer than the kernel expects, this error message is logged.

Therefore, this error message is logged when the NTP daemon:
cannot get accurate time information from the network time server.
has not invoked adjtimex() for a long time, longer than the kernel expects.

In practice, there are no bad effects on the system even if you see this message in the system log, because the OS continues to tick its internal clock without ntpd adjustments. ntpd also continues to work, even after it fails to get accurate time from the network time server.

Solution
You can simply ignore this message, since it is not a fatal error but a notice-level message. The message cannot be suppressed.

It is still worth checking your network, in particular the connection between the NTP daemon and the network time server.

Linux – dynamically add/remove scsi from linux




1. SCSI Device Addressing

A four-part addressing scheme is used to define the location of SCSI devices within a system. The attributes include:

<H>ost: Instance of hostadapter to which device is attached
<B>us: SCSI Bus or Channel on the hostadapter
<T>arget: SCSI Id assigned to an individual device
<L>un: Logical unit number on the device

Each attribute, <H> <B> <T> <L>, refers to a part of the device location, similar to how number, street, suburb and state all form an address.
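Tools such as lsscsi(8) print this address as [H:B:T:L]; splitting one back into its four parts is a one-liner (the address string here is a sample, not live output):

```shell
# Split an H:B:T:L address such as [1:0:1:0] back into Host, Bus, Target and Lun.
addr="1:0:1:0"
IFS=: read -r host bus target lun <<EOF
$addr
EOF
echo "host=$host bus=$bus target=$target lun=$lun"
# → host=1 bus=0 target=1 lun=0
```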



References to device addresses are readily visible from system logs and various command output, though the availability of certain commands or utilities depends on the distribution and operating system version used. The lsscsi(8) utility, for example, is natively available on Enterprise Linux 5.

The following denotes the use of several commands to describe a relatively simple SCSI system.
# dmesg
...
SCSI subsystem initialized
libata version 2.21 loaded.
ata_piix 0000:00:07.1: version 2.12
scsi0 : ata_piix
scsi1 : ata_piix
ata1: PATA max UDMA/33 cmd 0x000101f0 ctl 0x000103f6 bmdma 0x0001ffa0 irq 14
ata2: PATA max UDMA/33 cmd 0x00010170 ctl 0x00010376 bmdma 0x0001ffa8 irq 15
ata1.00: ATA-5: IC35L040AVVA07-0, VA2OA51A, max UDMA/100
ata1.00: 78165360 sectors, multi 8: LBA
ata1.01: ATA-5: WDC WD400BB-32CLB0, 05.04E05, max UDMA/100
ata1.01: 78165360 sectors, multi 8: LBA
ata1.00: configured for UDMA/33
ata1.01: configured for UDMA/33
ata2.00: ATA-5: QUANTUM FIREBALLP AS60.0, A1Y.1500, max UDMA/100
ata2.00: 117266688 sectors, multi 8: LBA
ata2.01: ATAPI: JLMS DVD-ROM XJ-HD166, DD05, max UDMA/33
ata2.00: configured for UDMA/33
ata2.01: configured for UDMA/33
scsi 0:0:0:0: Direct-Access ATA IC35L040AVVA07-0 VA2O PQ: 0 ANSI: 5
sd 0:0:0:0: [sda] 78165360 512-byte hardware sectors (40021 MB)
sd 0:0:0:0: [sda] Write Protect is off
sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00
sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
sd 0:0:0:0: [sda] 78165360 512-byte hardware sectors (40021 MB)
sd 0:0:0:0: [sda] Write Protect is off
sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00
sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
sda: sda1 sda2 sda3
sd 0:0:0:0: [sda] Attached SCSI disk
scsi 0:0:1:0: Direct-Access ATA WDC WD400BB-32CL 05.0 PQ: 0 ANSI: 5
sd 0:0:1:0: [sdb] 78165360 512-byte hardware sectors (40021 MB)
sd 0:0:1:0: [sdb] Write Protect is off
sd 0:0:1:0: [sdb] Mode Sense: 00 3a 00 00
sd 0:0:1:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
sd 0:0:1:0: [sdb] 78165360 512-byte hardware sectors (40021 MB)
sd 0:0:1:0: [sdb] Write Protect is off
sd 0:0:1:0: [sdb] Mode Sense: 00 3a 00 00
sd 0:0:1:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
sdb: sdb1 sdb2
sd 0:0:1:0: [sdb] Attached SCSI disk
scsi 1:0:0:0: Direct-Access ATA QUANTUM FIREBALL A1Y. PQ: 0 ANSI: 5
sd 1:0:0:0: [sdc] 117266688 512-byte hardware sectors (60041 MB)
sd 1:0:0:0: [sdc] Write Protect is off
sd 1:0:0:0: [sdc] Mode Sense: 00 3a 00 00
sd 1:0:0:0: [sdc] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
sd 1:0:0:0: [sdc] 117266688 512-byte hardware sectors (60041 MB)
sd 1:0:0:0: [sdc] Write Protect is off
sd 1:0:0:0: [sdc] Mode Sense: 00 3a 00 00
sd 1:0:0:0: [sdc] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
sdc: sdc1
sd 1:0:0:0: [sdc] Attached SCSI disk
scsi 1:0:1:0: CD-ROM JLMS DVD-ROM XJ-HD166 DD05 PQ: 0 ANSI: 5
...



# lspci | grep -i ide
00:07.1 IDE interface: Intel Corporation 82371AB/EB/MB PIIX4 IDE (rev 01)

# lsscsi
[0:0:0:0] disk ATA IC35L040AVVA07-0 VA2O /dev/sda
[0:0:1:0] disk ATA WDC WD400BB-32CL 05.0 /dev/sdb
[1:0:0:0] disk ATA QUANTUM FIREBALL A1Y. /dev/sdc
[1:0:1:0] cd/dvd JLMS DVD-ROM XJ-HD166 DD05 /dev/sr0

# cat /proc/scsi/scsi
Attached devices:
Host: scsi0 Channel: 00 Id: 00 Lun: 00
Vendor: ATA Model: IC35L040AVVA07-0 Rev: VA2O
Type: Direct-Access ANSI SCSI revision: 05
Host: scsi0 Channel: 00 Id: 01 Lun: 00
Vendor: ATA Model: WDC WD400BB-32CL Rev: 05.0
Type: Direct-Access ANSI SCSI revision: 05
Host: scsi1 Channel: 00 Id: 00 Lun: 00
Vendor: ATA Model: QUANTUM FIREBALL Rev: A1Y.
Type: Direct-Access ANSI SCSI revision: 05
Host: scsi1 Channel: 00 Id: 01 Lun: 00
Vendor: JLMS Model: DVD-ROM XJ-HD166 Rev: DD05
Type: CD-ROM ANSI SCSI revision: 05

# grep host /etc/modprobe.conf
alias scsi_hostadapter ata_piix

# ls -ld /sys/class/scsi_host/host*/
drwxr-xr-x 2 root root 0 2008-08-06 17:25 /sys/class/scsi_host/host0/
drwxr-xr-x 2 root root 0 2008-08-06 17:25 /sys/class/scsi_host/host1/

# cat /sys/class/scsi_host/host[0-1]/proc_name
ata_piix
ata_piix

Above, it’s evident that four devices are attached to two hostadapters, both of type ata_piix i.e. host0: sda sdb, host1: sdc sr0.

Note too, there is sufficient overlap between various command output that device addresses and naming can be easily identified and mapped.
2. SCSI Device Naming

The name assigned to a SCSI device is completely independent of its SCSI address. In fact, taking the Linux 2.6 kernel as an example, the device naming system used, udev(8), dynamically allocates device names upon each boot.

During system initialisation, hardware is scanned and devices are named according to their discovery order. This means, however, that the same device may not always be assigned the same name. This can have implications on some systems, especially those that rely solely on device names, such as in /etc/fstab entries for mounting filesystems.


For such cases, explicit udev(8) configuration may be required to guarantee cross-reboot persistence of device naming. Alternatively, other methods, such as mount-by-label (where supported), may be employed to ensure that only intended devices are selected and used, regardless of their arbitrary device name. udev(8), though mentioned here, is not described in any detail; its relevance, however, becomes more apparent later. Refer to the udev(8) and mount(8) man pages and the references below for more information.
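As a sketch of the mount-by-label alternative (the device name and label below are illustrative):

```shell
# Label an ext3 filesystem once (/dev/sdc1 and the label "data1" are examples):
#   e2label /dev/sdc1 data1
#
# Then mount by label in /etc/fstab, so a renumbering (sdc -> sdd) is harmless:
#   LABEL=data1    /data    ext3    defaults    1 2
```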
3. Scanning, Adding and Removing Devices

From time to time, it may be necessary to add, remove, replace or even reorganise SCSI devices in a system. Broadly speaking, there are two main approaches to how this can be achieved – offline and online.

The method one might choose to add or remove devices usually depends on several factors, such as:
distribution, operating system and version
hardware type, driver version/capability
system availability requirements
storage availability requirements
availability of backup/redundant systems
familiarity of system, I/O stack and storage
acceptance of associated risk

Following are several methods of device addition and removal. Regardless of which method you use, always ensure to perform thorough testing before use within a production environment.
3.1 System reboot

Adding and removing devices when a system is shutdown is considered the simplest and safest method. Clean filesystem unmount and ordered shutdown of all of components/drivers involved in the I/O path avoids the potential risks associated with online (dynamic) device removal. On boot, ordered driver load allows newly added or removed devices to be correctly discovered and identified. This method, obviously, necessitates total system unavailability.
3.2 Reload hostadapter driver

The installation (loading) of common host bus adapter (HBA) driver modules initiates a scan (or rescan) of the associated device, resulting in the (re)discovery of any newly added/removed devices. Whilst performed with the system online, to be able to reload the hostadapter module, it must first be unloaded. This, therefore, means that any (all) filesystem, volume or array on devices associated with the driver must first be offlined (in the case of arrays or logical volumes) and/or unmounted. In fact, depending on the complexity of the I/O stack involved, this method is likely to also require the shutdown of related storage services and the unloading of other related or dependent modules e.g. multipathing.
3.3 procfs /proc/scsi/scsi (2.4 kernel)

Linux provides the ability to dynamically interact with the running kernel via the /proc (procfs) filesystem. Dynamically adding or removing devices can be accomplished via the /proc/scsi/scsi interface i.e.:

To remove a specific device:
# echo "scsi remove-single-device <H> <B> <T> <L>" > /proc/scsi/scsi

where <H> <B> <T> <L> refers to Host, Bus, Target and Lun

To add a specific device:
# echo "scsi add-single-device <H> <B> <T> <L>" > /proc/scsi/scsi

where <H> <B> <T> <L> refers to Host, Bus, Target and Lun

This method allows all filesystems, volumes and arrays, except those immediately involved in the specific device removal, to remain online and mounted. However, this method carries a high element of risk. Strong knowledge of the system and all storage layers/devices involved in the I/O path is required. Removal of an incorrect device, or premature removal of an intended device, may result in volume corruption or, in the presence of Clusterware (e.g. Oracle Cluster File System 2 (OCFS2) or Real Application Clusters (RAC)), could result in node eviction.

Use this method as required if using a 2.4 kernel. The 2.6 kernel provides an improved sysfs interface (/sys, described below) for managing devices.

The following example illustrates the removal, then addition of a SCSI device:
# lsscsi
[0:0:0:0] disk ATA IC35L040AVVA07-0 VA2O /dev/sda
[0:0:1:0] disk ATA WDC WD400BB-32CL 05.0 /dev/sdb
[1:0:0:0] disk ATA QUANTUM FIREBALL A1Y. /dev/sdc
[1:0:1:0] cd/dvd JLMS DVD-ROM XJ-HD166 DD05 /dev/sr0



# echo "scsi remove-single-device 1 0 0 0" > /proc/scsi/scsi

# lsscsi
[0:0:0:0] disk ATA IC35L040AVVA07-0 VA2O /dev/sda
[0:0:1:0] disk ATA WDC WD400BB-32CL 05.0 /dev/sdb
[1:0:1:0] cd/dvd JLMS DVD-ROM XJ-HD166 DD05 /dev/sr0

# echo "scsi add-single-device 1 0 0 0" > /proc/scsi/scsi

# lsscsi
[0:0:0:0] disk ATA IC35L040AVVA07-0 VA2O /dev/sda
[0:0:1:0] disk ATA WDC WD400BB-32CL 05.0 /dev/sdb
[1:0:0:0] disk ATA QUANTUM FIREBALL A1Y. /dev/sdd
[1:0:1:0] cd/dvd JLMS DVD-ROM XJ-HD166 DD05 /dev/sr0

Note that the removed device, originally known as /dev/sdc, was added back to the system, but as /dev/sdd. This highlights the potential risk associated with solely relying on arbitrary kernel-assigned device file names.
3.4 sysfs /sys/class/scsi_host/ (2.6 kernel)

The 2.6 kernel provides the /sys (sysfs) interface for interacting and managing system devices. In the case of SCSI devices, the /sys/class/scsi_host/ interface can be used to dynamically rescan a hostadapter, as well as add or remove specific devices.

To rescan a hostadapter:
# echo '- - -' > /sys/class/scsi_host/host<H>/scan

where <H> refers to the hostadapter or the instance of hostadapter where multiple (of the same type) exist on the system

To remove a specific device:
# echo 1 > /sys/class/scsi_host/host<H>/device/target<H>:<B>:<T>/<H>:<B>:<T>:<L>/delete

where <H> <B> <T> <L> refers to Host, Bus, Target and Lun

To add a specific device:
# echo "<B> <T> <L>" > /sys/class/scsi_host/host<H>/scan

where <H> <B> <T> <L> refers to Host, Bus, Target and Lun

Like the /proc/scsi/scsi interface, the /sys/class/scsi_host/ interface allows all filesystems, volumes and arrays, except those immediately involved in the specific device removal, to remain online and mounted. Again, this method carries a high level of risk and requires a strong knowledge of the system and all storage layers/devices involved in the I/O path.

Where Fiber Channel (FC) Host Bus Adapters (HBA) are used, separate procfs and/or sysfs entries are created in various locations depending on Operating System and driver type and version used. In such cases, FC HBA driver level re-scan should precede SCSI Bus rescan. For example, for QLogic (qla2xxx):


Enterprise Linux 4:
# echo "scsi-qlascan" >> /proc/scsi/qla2xxx/<H>
# echo "scsi add-single-device <H> <B> <T> <L>" > /proc/scsi/scsi


Enterprise Linux 5:
# echo 1 > /sys/class/fc_host/host<H>/issue_lip
# echo '- - -' > /sys/class/scsi_host/host<H>/scan


The following example illustrates (non-FC HBA) SCSI hostadapter rescan, then removal and addition of a device:
# echo '- - -' > /sys/class/scsi_host/host1/scan



# lsscsi
[0:0:0:0] disk ATA IC35L040AVVA07-0 VA2O /dev/sda
[0:0:1:0] disk ATA WDC WD400BB-32CL 05.0 /dev/sdb
[1:0:0:0] disk ATA QUANTUM FIREBALL A1Y. /dev/sdc
[1:0:1:0] cd/dvd JLMS DVD-ROM XJ-HD166 DD05 /dev/sr0

[root@toxic ~]# echo 1 > /sys/class/scsi_host/host1/device/target1:0:0/1:0:0:0/delete

# lsscsi
[0:0:0:0] disk ATA IC35L040AVVA07-0 VA2O /dev/sda
[0:0:1:0] disk ATA WDC WD400BB-32CL 05.0 /dev/sdb
[1:0:1:0] cd/dvd JLMS DVD-ROM XJ-HD166 DD05 /dev/sr0

# echo '0 0 0' > /sys/class/scsi_host/host1/scan

# lsscsi
[0:0:0:0] disk ATA IC35L040AVVA07-0 VA2O /dev/sda
[0:0:1:0] disk ATA WDC WD400BB-32CL 05.0 /dev/sdb
[1:0:0:0] disk ATA QUANTUM FIREBALL A1Y. /dev/sdd
[1:0:1:0] cd/dvd JLMS DVD-ROM XJ-HD166 DD05 /dev/sr0

Once again, note that the name of the re-added device (/dev/sdd) differs from its original name (/dev/sdc).

The entries and names beneath /sys/class/scsi_host/ may vary depending on the operating system, kernel version and type of devices in use on the system. For example, when iSCSI devices are in use, an additional session directory entry exists:
# lsscsi
[0:0:0:0] disk ATA WDC WD1600JS-75N 10.0 /dev/sda
[2:0:0:0] disk IET VIRTUAL-DISK 0 /dev/sdb
[3:0:0:0] disk IET VIRTUAL-DISK 0 /dev/sdc
[7:0:0:0] disk IET VIRTUAL-DISK 0 /dev/sdd
[10:0:0:0] disk IET VIRTUAL-DISK 0 /dev/sde



# ls -ld /sys/class/scsi_host/host*/device/session*/target*/[0-9]*
drwxr-xr-x 3 root root 0 Aug 7 22:35 /sys/class/scsi_host/host10/device/session8/target10:0:0/10:0:0:0
drwxr-xr-x 3 root root 0 Aug 7 22:35 /sys/class/scsi_host/host2/device/session0/target2:0:0/2:0:0:0
drwxr-xr-x 3 root root 0 Aug 7 22:35 /sys/class/scsi_host/host3/device/session1/target3:0:0/3:0:0:0
drwxr-xr-x 3 root root 0 Aug 7 22:35 /sys/class/scsi_host/host7/device/session5/target7:0:0/7:0:0:0


3.5 Host Adapter Vendor Supplied Scripts

Some host adapter vendors supply their own scripts that can be used to scan, add and remove devices. These scripts may be used in preference to manual device interaction via the native kernel interfaces, but should be thoroughly tested before being relied upon in production. Naturally, where provided by a third-party vendor, any issues arising from their use should be referred to the originating supplier.

Examples of vendor supplied scripts include:
ql-dynamic-tgt-lun-disc.sh (QLogic)
rescan-scsi-bus.sh (QLogic)
qlun_disc.sh (QLogic)
lun_scan (Emulex)
hp_rescan (HP)
proprietary and custom others

Contact your host adapter vendor for the latest available and recommended scripts.

4. Removing a Multipath Device by Example

Let’s look at a more complex, yet typical scenario involving multipathing (device-mapper-multipath).
In this instance, explicitly white-listed iSCSI-served partitioned target devices (LUNs) are multipathed on the initiator using user-defined names: ocr1, voting1, etc.
# dmsetup ls | sort
ocr1 (253, 5)
ocr1p1 (253, 9)
ocr2 (253, 6)
ocr2p1 (253, 10)
ocr3 (253, 7)
ocr3p1 (253, 11)
voting1 (253, 0)
voting1p1 (253, 3)
voting2 (253, 1)
voting2p1 (253, 4)
voting3 (253, 2)
voting3p1 (253, 8)



# multipath -ll
ocr3 (149455400000000000000000001000000ca0200000d000000) dm-7 IET,VIRTUAL-DISK
[size=980M][features=0][hwhandler=0]
\_ round-robin 0 [prio=0][active]
\_ 1:0:0:10 sdn 8:208 [active][ready]
\_ round-robin 0 [prio=0][enabled]
\_ 1:0:0:11 sdo 8:224 [active][ready]
ocr2 (149455400000000000000000001000000ed0200000d000000) dm-6 IET,VIRTUAL-DISK
[size=980M][features=0][hwhandler=0]
\_ round-robin 0 [prio=0][active]
\_ 1:0:0:8 sdl 8:176 [active][ready]
\_ round-robin 0 [prio=0][enabled]
\_ 1:0:0:9 sdm 8:192 [active][ready]
ocr1 (149455400000000000000000001000000e80200000d000000) dm-5 IET,VIRTUAL-DISK
[size=980M][features=0][hwhandler=0]
\_ round-robin 0 [prio=0][active]
\_ 1:0:0:6 sdj 8:144 [active][ready]
\_ round-robin 0 [prio=0][enabled]
\_ 1:0:0:7 sdk 8:160 [active][ready]
voting3 (149455400000000000000000001000000e30200000d000000) dm-2 IET,VIRTUAL-DISK
[size=965M][features=0][hwhandler=0]
\_ round-robin 0 [prio=0][active]
\_ 1:0:0:4 sdh 8:112 [active][ready]
\_ round-robin 0 [prio=0][enabled]
\_ 1:0:0:5 sdi 8:128 [active][ready]
voting2 (149455400000000000000000001000000de0200000d000000) dm-1 IET,VIRTUAL-DISK
[size=965M][features=0][hwhandler=0]
\_ round-robin 0 [prio=0][active]
\_ 1:0:0:2 sdf 8:80 [active][ready]
\_ round-robin 0 [prio=0][enabled]
\_ 1:0:0:3 sdg 8:96 [active][ready]
voting1 (149455400000000000000000001000000d90200000d000000) dm-0 IET,VIRTUAL-DISK
[size=965M][features=0][hwhandler=0]
\_ round-robin 0 [prio=0][active]
\_ 1:0:0:1 sde 8:64 [active][ready]
\_ round-robin 0 [prio=0][enabled]
\_ 1:0:0:0 sdd 8:48 [active][ready]

# cat /proc/partitions
major minor #blocks name

8 0 6291456 sda
8 1 5735173 sda1
8 2 554242 sda2
8 16 2097152 sdb
8 17 2096451 sdb1
8 32 2097152 sdc
8 33 2096451 sdc1
8 48 987966 sdd
8 49 987681 sdd1
8 64 987966 sde
8 65 987681 sde1
8 80 987966 sdf
8 81 987681 sdf1
8 96 987966 sdg
8 97 987681 sdg1
8 112 987966 sdh
8 113 987681 sdh1
8 128 987966 sdi
8 129 987681 sdi1
8 144 1004031 sdj
8 145 1003873 sdj1
8 160 1004031 sdk
8 161 1003873 sdk1
8 176 1004031 sdl
8 177 1003873 sdl1
8 192 1004031 sdm
8 193 1003873 sdm1
8 208 1004031 sdn
8 209 1003873 sdn1
8 224 1004031 sdo
8 225 1003873 sdo1
253 0 987966 dm-0
253 1 987966 dm-1
253 2 987966 dm-2
253 3 987681 dm-3
253 4 987681 dm-4
253 5 1004031 dm-5
253 6 1004031 dm-6
253 7 1004031 dm-7
253 8 987681 dm-8
253 9 1003873 dm-9
253 10 1003873 dm-10
253 11 1003873 dm-11

Below, unused multipath device ocr3 is dynamically removed from the system, as are its associated underlying devices. The devices were verified to no longer be in use or required by any program or service before their removal.

Note that whether the /sbin/multipath or the /sbin/dmsetup command is used, the result is the same. However, when using the dmsetup command to remove a partitioned multipath device, the multipath aliases for all of its partitions must be removed before the multipath alias of the device itself, otherwise the command fails with a 'device or resource busy' error. When using the multipath command, removing the multipath alias of the device automatically removes the multipath aliases for all of its partitions.
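Before flushing a map, it is useful to record its underlying H:B:T:L paths so that each can be deleted afterwards. The sketch below extracts them from saved `multipath -ll` output; the function name is illustrative, and since it is pure text processing it can be exercised against output such as the ocr3 listing shown here:

```shell
#!/bin/sh
# list_paths FILE  --  print the H:B:T:L identifier of every path line
# in saved `multipath -ll <alias>` output, one per line.
list_paths() {
    # Path lines look like: " \_ 1:0:0:10 sdn 8:208 [active][ready]";
    # only the four-part H:B:T:L token matches this pattern.
    grep -oE '[0-9]+:[0-9]+:[0-9]+:[0-9]+' "$1"
}
```

Each printed identifier can then be fed to the per-device delete interface shown earlier.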
# multipath -ll ocr3
ocr3 (149455400000000000000000001000000ca0200000d000000) dm-7 IET,VIRTUAL-DISK
[size=980M][features=0][hwhandler=0]
\_ round-robin 0 [prio=0][active]
\_ 1:0:0:10 sdn 8:208 [active][ready]
\_ round-robin 0 [prio=0][enabled]
\_ 1:0:0:11 sdo 8:224 [active][ready]



# dmsetup ls | sort
ocr1 (253, 5)
ocr1p1 (253, 9)
ocr2 (253, 6)
ocr2p1 (253, 10)
ocr3 (253, 7)
ocr3p1 (253, 11)
voting1 (253, 0)
voting1p1 (253, 3)
voting2 (253, 1)
voting2p1 (253, 4)
voting3 (253, 2)
voting3p1 (253, 8)

# cat /proc/partitions | grep -e 'sdo\|sdn\|dm-7\|dm-11'
8 208 1004031 sdn
8 209 1003873 sdn1
8 224 1004031 sdo
8 225 1003873 sdo1
253 7 1004031 dm-7
253 11 1003873 dm-11

# multipath -f ocr3
OR
# dmsetup remove ocr3p1
# dmsetup remove ocr3

# multipath -ll ocr3
#

# dmsetup ls | sort
ocr1 (253, 5)
ocr1p1 (253, 9)
ocr2 (253, 6)
ocr2p1 (253, 10)
voting1 (253, 0)
voting1p1 (253, 3)
voting2 (253, 1)
voting2p1 (253, 4)
voting3 (253, 2)
voting3p1 (253, 8)

# cat /proc/partitions | grep -e 'sdo\|sdn\|dm-7\|dm-11'
8 208 1004031 sdn
8 209 1003873 sdn1
8 224 1004031 sdo
8 225 1003873 sdo1

# echo 1 > /sys/class/scsi_host/host1/device/session0/target1:0:0/1:0:0:10/delete
# echo 1 > /sys/class/scsi_host/host1/device/session0/target1:0:0/1:0:0:11/delete

# cat /proc/partitions | grep -e 'sdo\|sdn\|dm-7\|dm-11'
#


Note that the device-mapper devices previously associated with multipath device ocr3 (/dev/dm-7, /dev/dm-11) are removed; however, the underlying device paths (/dev/sdn, /dev/sdo) remain until explicitly removed from the system.
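Deleting each leftover path can be scripted once the map has been flushed, using the per-device delete attribute under /sys/block. A sketch; the device names are taken from the example above, and the sysfs base is parameterised only so the loop can be dry-run tested:

```shell
#!/bin/sh
# delete_paths DEV...  --  delete each named SCSI device (e.g. sdn sdo)
# via its sysfs delete attribute.
SYS_BLOCK="${SYS_BLOCK:-/sys/block}"

delete_paths() {
    for dev in "$@"; do
        # only touch devices that actually expose a delete attribute
        [ -w "$SYS_BLOCK/$dev/device/delete" ] || continue
        echo 1 > "$SYS_BLOCK/$dev/device/delete"
    done
}
```

For example, `delete_paths sdn sdo` after `multipath -f ocr3` would complete the removal shown above.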
