Posts tagged Open Source
Network and system monitoring is a broad category. There are solutions that monitor for the proper operation of servers, network gear, and applications, and there are solutions that track the performance of those systems and devices, providing trending and analysis. Some tools will sound alarms and notifications when problems are detected, while others will even trigger actions to run when alarms sound. Here is a collection of open source solutions that aim to provide some or all of these capabilities.
Cacti is a very extensive performance graphing and trending tool that can be used to track just about any monitored metric that can be plotted on a graph. From disk utilization to fan speeds in a power supply, if it can be monitored, Cacti can track it — and make that data quickly available.
Nagios is the old guard of system and network monitoring. It is fast, reliable, and extremely customizable. Nagios can be a challenge for newcomers, but the rather complex configuration is also its strength, as it can be adapted to just about any monitoring task. What it may lack in looks it makes up for in power and reliability.
Icinga is an offshoot of Nagios that is currently being rewritten from the ground up. It offers a thorough monitoring and alerting framework designed to be as open and extensible as Nagios, but with several different Web UI options. Icinga 1 is closely related to Nagios, while Icinga 2 is the rewrite. Both versions are currently supported, and Nagios users can migrate to Icinga 1 very easily.
NeDi may not be as well known as some of the others, but it’s a great solution for tracking devices across a network. It continuously walks through a network infrastructure and catalogs devices, keeping track of everything it discovers. It can provide the current location of any device, as well as a history.
NeDi can be used to locate stolen or lost devices by alerting you if they reappear on the network. It can even display all known and discovered connections on a map, showing how every network interconnect is laid out, down to the physical port level.
Observium combines system and network monitoring with performance trending. It uses both static and auto discovery to identify servers and network devices, leverages a variety of monitoring methods, and can be configured to track just about any available metric. The Web UI is very clean, well thought out, and easy to navigate.
Observium can also display the physical location of monitored devices on a geographical map, and its heads-up panels show active alarms and device counts.
Zabbix monitors servers and networks with an extensive array of tools. There are Zabbix agents for most operating systems, or you can use passive or external checks, including SNMP to monitor hosts and network devices. You’ll also find extensive alerting and notification facilities, and a highly customizable Web UI that can be adapted to a variety of heads-up displays. In addition, Zabbix has specific tools that monitor Web application stacks and virtualization hypervisors.
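At its core, the agent-style check that tools like Zabbix perform boils down to polling a metric and comparing it against a trigger threshold. Here is a minimal illustrative sketch in Python; the disk-usage metric and the 90% threshold are example choices, not Zabbix's actual item or trigger syntax:

```python
import shutil

# Illustrative trigger threshold: raise an alarm above 90% disk usage.
DISK_USAGE_THRESHOLD = 0.90

def check_disk_usage(path="/"):
    """Poll one metric, the way an agent-side item check would."""
    usage = shutil.disk_usage(path)
    return usage.used / usage.total

def evaluate_trigger(value, threshold=DISK_USAGE_THRESHOLD):
    """Turn a sampled value into an alarm state."""
    return "PROBLEM" if value > threshold else "OK"

state = evaluate_trigger(check_disk_usage("/"))
print(f"disk usage check: {state}")
```

A real monitoring system layers scheduling, history storage, and notification escalation on top of this basic poll-and-compare loop.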
Zabbix can also produce logical interconnection diagrams detailing how certain monitored objects are interconnected. These maps are customizable, and maps can be created for groups of monitored devices and hosts.
Ntop is a packet sniffing tool with a slick Web UI that displays live data on network traffic passing by a monitoring interface. Instant data on network flows is available through an advanced live graphing function. Host data flows and host communication pair information are also available in real time.
Mesosphere has closed new funding to help it bring the open-source Mesos software to a wider audience
Apache Mesos, a software package for managing large compute clusters that’s been credited with helping Twitter to kill its Fail Whale, is being primed for use in the enterprise.
One of its main backers, a startup called Mesosphere, announced Monday it has closed an additional $10.5 million round of funding and will use the money to develop new tools and support offerings to make Mesos more appealing to large businesses.
Mesos is open-source software originally developed at the University of California at Berkeley. It sits between the application layer and the operating system and makes it easier to deploy and manage applications in large-scale clustered environments.
Twitter adopted Mesos several years ago and contributes to the open-source project. The software helped Twitter overcome its scaling problems and make the Fail Whale — the cartoon symbol of its frequent outages — a thing of the past.
Mesosphere’s CEO, Florian Leibert, was an engineer at Twitter who pushed for its use there. He left Twitter a few years ago to implement Mesos at AirBnB, and last year he left AirBnB to cofound Mesosphere, which distributes Mesos along with documentation and tools.
On Monday Mesosphere said it had secured a new round of funding led by Silicon Valley investment firm Andreessen Horowitz. It will use the money to expand its commercial support team and develop Mesos plug-ins that it plans to license to businesses.
Mesos has several advantages in a clustered environment, according to Leibert. In a similar way that a PC OS manages access to the resources on a desktop computer, he said, Mesos ensures applications have access to the resources they need in a cluster. It also reduces a lot of the manual steps in deploying applications and can shift workloads around automatically to provide fault tolerance and keep utilization rates high.
A lot of modern workloads and frameworks can run on Mesos, including Hadoop, Memcached, Ruby on Rails and Node.js, as well as various Web servers, databases and application servers.
For developers, Mesos takes care of the base layer of “plumbing” required to build distributed applications, and it makes applications portable so they can run in different types of cluster environments, including on both virtualized hardware and bare metal.
It improves utilization by allowing operations staff to move beyond “static partitioning,” where workloads are assigned to a fixed set of resources, and build more elastic cluster environments.
The software is used today mostly by online firms like Netflix, Groupon, HubSpot and Vimeo, but Mesosphere will target large enterprises — “the Global 2,000,” Leibert said — that are wrestling with large volumes of data and struggling to manage it all at scale.
That includes customer data collected at busy websites and operational data gathered in the field. “A lot of organizations are under pressure to do things at scale, they’re running a lot of diverse applications and the wheels are coming off,” said Matt Trifiro, a senior vice president at Mesosphere.
Mesos can manage clusters in both private and public clouds, and in December Mesosphere released a tool for deploying Mesos to Amazon Web Services. Both AirBnB and HubSpot manage their Amazon infrastructures with Mesos.
Mesosphere will continue to provide its Mesos distribution for free, including tools it developed such as Marathon. On Monday it released an update to the core Mesos distribution, version 0.19, along with new documentation.
It plans to make money by developing and licensing plug-ins for Mesos for tasks like dashboard management, debugging, monitoring and security, and by selling professional services.
It has 25 full-time employees today, spread between Germany and San Francisco. “We’re building out our services operation as we speak,” Trifiro said.
Microsoft cheaper to use than open source software, UK CIO says
A UK government CIO says every time they compare FOSS to MSFT, Redmond wins.
A UK government CIO says that every time his organization has evaluated open source against Microsoft products, the Microsoft products have come out cheaper in the long run.
Jos Creese, CIO of the Hampshire County Council, told Britain's "Computing" publication that part of the reason is that most staff are already familiar with Microsoft products and that Microsoft has been flexible and helpful.
“Microsoft has been flexible and helpful in the way we apply their products to support the delivery of our frontline services, and this helps to de-risk ongoing costs,” he told the publication. “The point is that the true cost is in the total cost of ownership and exploitation, not just the license cost.”
Creese went on to say he didn’t have a particular bias about open source over Microsoft, but proprietary solutions from Microsoft or any other commercial software vendor “need to justify themselves and to work doubly hard to have flexible business models to help us further our aims.”
He acknowledged that there are problems on both sides. In some cases, central government has developed an undue dependence on a few big suppliers, which makes it hard to be confident about getting the best value out of the deal.
On the other hand, he is leery of depending on a small firm, and Red Hat aside, there aren't many large, financially solid firms in open source comparable to Oracle, SAP, and Microsoft. Smaller firms often offer the greatest innovation, but there is a risk in signing a significant deal with a smaller player.
“There’s a huge dependency for a large organization using a small organization. [You need] to be mindful of the risk that they can’t handle the scale and complexity, or that the product may need adaptation to work with our infrastructure,” said Creese.
I've heard this argument before. Open source is cheaper in acquisition costs but harder to support over the long run. Part of it is FOSS's DIY ethos, and bless you guys for being able to debug and recompile a complete app or Linux distro, but not everyone is that skilled.
The other problem is the lack of support from vendors or third parties. No one has done what IBM has the resources to do. Twenty years after Linus first tossed his creation onto the Internet for all to use, we still don't have an open source equivalent of Microsoft or Oracle. Don't say that's a good thing, because that's only seeing it from one side. Business users will demand support levels that FOSS vendors can't provide. That's why we have yet to see an open source Oracle.
The part that saddens me is that reading Creese’s interview makes it clear he has more of a clue about technology than pretty much anyone we have in office on this side of the pond.
One of the most common things I see on a day-to-day basis when interacting with potential clients is confusion between machine translation and translation memory. I recently covered machine translation, so in the interest of equal coverage, I will now focus on translation memory.
Translation Memory (TM) is a tool that helps human translators to work more efficiently and with a higher degree of accuracy and quality.
So how does it work? At a high level, translation memory creates a relationship between a segment of source language text and a corresponding segment of target language text.
Here is an example:
You write the sentence, “My house is blue,” on your company website and translate the phrase into Spanish.
“My house is blue” is now linked in the translation memory system with its target language equivalent, “Mi casa es azul.”
Why anyone would have a blue house, or would want to publish this on their website, defies logic, but work with me here, please (note: blue houses are completely normal and this post is not intended to offend anyone who lives in one).
The important thing is that the relationship between those two text segments is in place. Why is this important? For one, if that segment repeats itself across the site, it can be re-used automatically. So you are getting the benefit of accurate, human translation without having to pay for it more than once.
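The segment-to-segment relationship described above is essentially a lookup table keyed on the source text. Here is a minimal illustrative sketch in Python, using the example segment from this post; a real TM system also stores metadata such as translator, date, and approval status:

```python
# A translation memory maps source-language segments to approved
# target-language segments, so human work is never paid for twice.
translation_memory = {
    "My house is blue": "Mi casa es azul",
}

def translate_segment(source):
    """Reuse an exact match from the TM; otherwise flag for a human."""
    if source in translation_memory:
        return translation_memory[source]  # free, consistent reuse
    return None                            # route to a human translator

print(translate_segment("My house is blue"))
```

Every exact repeat of a stored segment across the site costs nothing and is guaranteed to read identically, which is where the consistency benefit comes from.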
Since the segment is being re-used, you also have the benefit of consistent language. Language consistency is especially important to corporations for many reasons, ranging from maintaining brand voice in marketing content to increasing customer comprehension in informational content. Language is extremely subjective, meaning that content can be written or expressed in multiple ways by different authors and have the same connotation or meaning to the intended audience. The goal is to publish content that is consistent in the source language and then use translation memory tools to ensure that the translated equivalents are consistent, as well.
Another benefit of re-using language is that it increases language accuracy. Each time the technology reuses a previously approved phrase from the database, it removes a manual step that a human would otherwise perform. Using best-practice translation technology therefore not only increases efficiency but also increases language accuracy, because it mitigates the risk of introducing an error into segments that have already been translated.
Since the gating factor in getting content to market is the overall number of words that need to be translated, reducing the amount of work that passes through a human process lets you go live much faster by eliminating manual, repetitive effort.
Another concept of translation memory is “fuzzy matching.” This means that once your translation memory is created and updates are processed against it, the system can look for segments that are close matches (e.g. “My house is red”), so that the translators just need to make minor modifications to the existing target language segment as opposed to an entirely new translation.
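Fuzzy matching can be illustrated with a similarity score over segments. Commercial TM tools use their own proprietary scoring, but Python's standard difflib gives the idea; the 75% cutoff below is an arbitrary example threshold, not an industry standard:

```python
from difflib import SequenceMatcher

translation_memory = {
    "My house is blue": "Mi casa es azul",
}

def fuzzy_match(segment, threshold=0.75):
    """Return (source, target, score) for the closest TM entry, or None."""
    best_source, best_score = None, 0.0
    for source in translation_memory:
        score = SequenceMatcher(None, segment, source).ratio()
        if score > best_score:
            best_source, best_score = source, score
    if best_score >= threshold:
        return best_source, translation_memory[best_source], best_score
    return None

# "My house is red" scores well above the cutoff against "My house is blue",
# so the translator starts from "Mi casa es azul" and edits only the color.
match = fuzzy_match("My house is red")
print(match)
```

In practice the translator is shown the stored target segment alongside the match percentage, so the closer the match, the less they have to retype.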
We will get into the benefits of server-based translation memory versus desktop-based translation memory in a future post, but the key thing to remember is that this solution offers multiple benefits to the overall translation process.
So make sure that your vendor is using it, you’re made aware of your savings from it, and whatever translation memory is created becomes your intellectual property.
Now I am off to paint my house blue…