General Micro Systems has a policy that is simple but effective: to develop innovative products, manufacture quality products and deliver the very best customer service. Our equipment must conform to the customer’s requirements and work first time, every time, all the time.
GMS ensures that it performs to this standard through a continuous quality-improvement process. It is a question of corporate culture: through a total quality management system, staff at every level are committed to quality.
GMS has been certified to ISO 9001 for each of the last seventeen years, and its work in military and aerospace programs is underpinned by certification to the more demanding AS9100C standard.
The business model could be described as “design and prototype”. Other companies have the scale and efficiency to manufacture the equipment that GMS designs, so General Micro Systems stays out of manufacturing and avoids the continual rounds of major capital investment in production equipment with a limited life. Instead, GMS has manufacturing partners around the world, selected for their quality systems and their ability to scale production up or down rapidly. GMS calls this arrangement “virtual manufacturing”.
The benefit is clear: as customers’ needs vary, the GMS capability varies with it. An elastic manufacturing capacity reflects the realities of the marketplaces in which the company operates.
Research and development is a different matter. GMS focuses on continuous product development and improvement, not least because it does not also have to oversee a bulk manufacturing process. There is a close link between General Micro Systems and giants in the field such as the Intel Intelligent Systems Group (for CPUs) and the Intel Data Center Group (for server technologies).
All of this is backed up by a well-staffed and responsive support facility that ensures close cooperation with customers and immediate and effective response to queries.
The computing requirements in oil, gas and mining exploration are similar to those of our military customers: rugged computers that stand up to harsh conditions in the field, are light enough to be carried and offer a very high performance-to-size ratio. In addition, they need to be frugal with power.
It is because General Micro Systems scores so highly against those markers that GMS has become the supplier of choice in the exploration business. Conditions in the field are always challenging and sometimes hazardous, while computing needs are precise. VME provides an ideal solution to many of the problems faced, which is why companies such as Halliburton, Schlumberger and General Electric have been GMS stalwarts for some time.
The equipment we provide can be submerged in liquid, is proof against sparks and corrosion, withstands the harshest conditions and can be solar powered to operate far from a power source while remaining accurate in data collection and communication. Green computing and energy efficiency are areas where GMS already has a remarkable record and ones we continue to explore.
All GMS products have RuggedCool technology, the most advanced cooling system in computing. With RuggedCool, an Intel CPU with a TjMax of 105 °C can operate at full load in ambient temperatures between -40 °C and +85 °C. The secret is an exclusive technology that uses a corrugated alloy slug with minimal thermal resistance as a heat spreader at the processor die. With the heat spread over a much larger area, a sealed chamber containing a liquid-silver compound transfers it to the system’s enclosure, instead of the thermal gap pads that carry heat from CPU to cold plate in all other ruggedised computers.
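The arithmetic behind that temperature claim is easy to check. The sketch below is purely illustrative: the 105 °C TjMax and +85 °C ambient come from the figures above, but the 35 W power dissipation is an assumed example, not a GMS specification.

```python
# Illustrative thermal-budget arithmetic. TjMax (105 degC) and the +85 degC
# ambient extreme come from the text; the 35 W dissipation is an assumption.

def max_thermal_resistance(tj_max_c, ambient_c, power_w):
    """Largest junction-to-ambient thermal resistance (degC/W) that still
    keeps the die at or below tj_max_c at the given ambient and power."""
    return (tj_max_c - ambient_c) / power_w

# At the +85 degC extreme there is only a 20 degC rise budget, so for a
# hypothetical 35 W processor the entire cooling path must stay below:
budget = max_thermal_resistance(105, 85, 35)
print(f"Required junction-to-ambient resistance: {budget:.2f} degC/W")
```

The narrower that budget, the more the gap-pad approach struggles, which is why spreading the heat over a larger area before moving it to the enclosure matters.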
For all our success, General Micro Systems does not rest on its laurels. Research and product development continue to ensure that it is always GMS that offers the combination of the latest technology in the smallest boxes with the most rugged resistance to anything the world can throw at them.
The US military has been a General Micro Systems customer for more than thirty years, reflecting GMS’s ability to deliver reliable and high performance devices off-the-shelf to meet a wide variety of needs. Major aerospace and defence programs serviced by GMS include missiles, helicopters, unmanned ground vehicles, wearable systems and Air Force One.
Of course, we take great interest in what makes General Micro Systems a trusted supplier to the military. Several factors matter, and long product life is among the most significant: upgradable processor technology means that replacement as part of the normal maintenance cycle is straightforward and low cost. The ruggedness of systems that have been tested in extreme environments, on and off the battlefield, is another.
Form factor is an expression often heard in computing; for those who have not come across it, the “form factor” describes the specification of the motherboard: dimensions, type of power supply, number of ports on the back panel and so forth. Standard form factors bring the advantage of interchangeability across both vendors and technology generations. The importance of form factor in military applications is that it decides the size of the case, and the military needs computers with the best possible size, weight and power (SWaP) ratio.
Perhaps even more important are the efficiency with which the devices are cooled (a computer that overheated in battle would be worse than useless) and the performance/power ratio.
GMS designs multi-stage regulators into all its products to reduce power consumption, while the patented RuggedCool technology has no equal as a cooling system anywhere in the world and also outperforms all competition in shock and vibration absorption.
GMS also works closely with the US Navy on, for example, Tomahawk weapon control and electronic warfare devices, while collaborating with prime contractors such as Lockheed Martin to wring unprecedented improvements from the destroyer fleet and extend its life.
GMS And Standards
February 8, 2016
Standards have been a topic of discussion in the computer business for almost as long as there has been a computer business. Perhaps that is changing.
The purpose of a standard is to allow any user of any computer anywhere in the world to remove a defective card and replace it. The replacement need not be the same model or even be made by the same company. All that matters is that it meets the same standard.
And yet, so often, that is not what happens. The customer finds that, although the new card is sold as meeting the standard, the pinout arrangement is different or the I/O connectors are not the same, and all pretence at plug-and-play and swappability goes out the window.
Increasingly, we are finding that customers don’t mind this because they are more likely to swap out the entire unit, send it back for repair and plug a new one into the gap in its place. This is particularly true with embedded single board computers and in the military and industrial control applications in which GMS excels.
What GMS has done in response to this situation is to produce a family of products a quarter the size of an ATR box but with more I/O capabilities, more functionality and better heat dissipation. We hear two objections to this:
- It’s a modular system and it’s not a standard
- Because it’s proprietary, we can’t exchange components with other vendors’ equipment.
Those objections are true but the component or card exchange problem also exists with systems that supposedly do meet the standard – so what’s the point of a standard that isn’t really a standard?
When we began with the VME bus, there was a lot more cooperation and collaboration between manufacturers and more respect paid to the concept of interchangeability. Since that’s disappeared, it seems pointless to pretend that we still have a universal standard. What GMS offers in its place is an open spec.
So What Is The VME Bus?
February 7, 2016
VME stands for Versa Module Europa. It’s a computer bus that was first created in 1981 for Motorola 68000 CPUs and then taken up for so many applications that the IEC standardised it and ANSI/IEEE followed suit with 1014-1987. It grew out of Eurocard standards but Eurocard does not define a signalling system and the VME bus developed its own.
The standard as originally developed was a 16-bit bus sized to fit the DIN connectors of the Eurocard of the day. A number of updates since then have widened the bus: VME64 supports 64-bit data transfers (32-bit on the smaller cards) and typically performs at around 40 MB/s. There is a hot-swap, plug-and-play version (VME64x), and linkage standards allow interconnection of VME systems. Synchronous protocols have also been developed.
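To put the quoted throughput in perspective, here is a back-of-envelope sketch. The 40 MB/s figure is from the text above; the payload sizes are invented for illustration.

```python
# Back-of-envelope bus-transfer arithmetic around the ~40 MB/s VME64 figure
# quoted above. Payload sizes here are invented for illustration.

def transfer_time_s(megabytes, rate_mb_s=40):
    """Seconds needed to move a payload over a bus sustaining rate_mb_s."""
    return megabytes / rate_mb_s

for payload in (1, 10, 100):  # payload sizes in MB
    print(f"{payload:4d} MB takes {transfer_time_s(payload):.2f} s at 40 MB/s")
```

Sustained rates on real hardware depend on transfer mode (single-cycle versus block) and arbitration overhead, so treat this as an upper bound on what the bus delivers.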
Extensions have enabled “sideband” communication channels to run parallel to VME and these are largely available under proprietary brand names. StarFabric, InfiniBand and RapidIO are examples.
Developments growing out of the VMEbus have included VXIbus and STEbus. There are those who say – and, in fact, have been saying for a number of years – that the VMEbus has had its day as a leading OEM bus. Each time we hear that, it seems that someone develops a new application for which VMEbus is just right. Now, with the Intel Core 2 Duo processor coming on the scene, VME has a new lease of life. Its functionality for new applications has proved difficult for competing systems to outperform, and it remains very competitive in cost and performance.
In addition, the large number of existing VME implementations (including, but not restricted to, military and industrial systems) need an upgrade path that won’t break the bank and the developing VME bus offers that. Take the flexible I/O of VME and add the performance of the new Intel processors and the question is not when will VME die but what on earth can take its place?
In the days before IBM launched the PC, there were many brands of microcomputer and incompatibility was rife. The software that ran on one device would not run on another; printers and scanners you could attach to one would not work with a different machine. For mainstream computing, Microsoft put an end to that, but at a price: it imposed a uniformity that still does not hold in single board computing. Hence the need for open-standards computing.
The PCI Industrial Computer Manufacturers Group (PICMG) is a group of more than 250 companies developing open standards for – among other things – embedded computing applications.
To talk about this subject, we also need to mention the ISA or Industry Standard Architecture bus. (Once again, a bus is what carries data and commands around a computer). Early IBM PCs used the ISA bus and its variant the AT bus but the PCI bus took over some twenty-five years ago. The PCI bus brings a number of advantages, but there is still a need for the older bus in single board embedded computers and one of the benefits that PICMG brings is that backplanes are available in a huge variety that includes support for the ISA bus.
What’s more, a backplane in a single board computer can have as many as twelve slots in any combination and that allows for a lot of I/O options.
General Micro Systems Inc (GMS) is a member of PICMG and, as well as being committed to open standards, is technology-independent, which means using the best CPU technology available rather than being confined to one chip manufacturer. All that concerns GMS is “bang per watt”, which the company defines as the processor’s aggregate performance divided by the power it uses. The search is not (necessarily) for the lowest power consumption or the maximum performance; what GMS wants is the processor that gives maximum efficiency.
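The metric itself is simple to sketch. In the illustrative Python below, the CPU names, benchmark scores and wattages are all invented, not GMS benchmark data; the point is only how bang per watt ranks candidates.

```python
# "Bang per watt" as defined in the text: aggregate performance divided by
# power drawn. All names, scores and wattages below are hypothetical.

def bang_per_watt(performance, watts):
    """Aggregate benchmark score per watt of power consumed."""
    return performance / watts

candidates = {
    "hypothetical CPU A": (1200, 45),  # (aggregate score, watts)
    "hypothetical CPU B": (1500, 65),
    "hypothetical CPU C": (900, 25),
}

best = max(candidates, key=lambda name: bang_per_watt(*candidates[name]))
for name, (score, watts) in candidates.items():
    print(f"{name}: {bang_per_watt(score, watts):.1f} points/W")
print("Most efficient:", best)
```

Note that the fastest chip (B) and the lowest-power chip (C) need not be the same, and the metric can pick either, depending on the numbers; that is exactly the point of optimising for efficiency rather than either extreme.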
Today when we talk about single board computers we are normally speaking of an architecture in which I/O cards can be inserted into a backplane and the single board computer goes into the same backplane. The backplane is, essentially, the bus – the pathway around which commands and data pass – and a series of pin connectors can be substituted for the bus.
The current generation of single board computers is seen most often in process control (where they are likely to be rack-mounted) or as control and interface devices when embedded in other devices. Single board computers are generally more reliable than multi-board computers doing the same job as well as being smaller and lighter and cheaper to run (because they use less power).
However, single board computers have to compete with ATX motherboards which are usually cheaper because they are made in huge numbers whereas single board computers, designed to fill a particular niche, can’t offer anything like the same economies of scale in manufacture.
A typical configuration used in process control might consist of a CPU, bootstrap PROMs, two serial I/O ports and three bidirectional parallel I/O ports, together with 64 KB of memory and a programmable real-time clock. This computer would connect to a supervisory or host computer, and the clock would be programmed to generate a 1 Hz signal so that process measurements are read and control signals sent out once every second.
A suitable language for the sort of process control configuration just described would be Pascal, although Basic was also in common use for general applications.
The single board computer would be configured and programmed to read a number of process measurements, perform the control algorithm and send out control signals. The SPC would also signal to the host any change in the control parameters.
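The loop just described can be sketched as follows. This is an illustrative simulation in modern Python rather than the era’s Pascal: the sensor read, the control law and the host reporting are all invented stand-ins, and only the once-per-tick structure mirrors the configuration above.

```python
# Minimal sketch of a 1 Hz supervisory control loop. The sensor model and
# proportional control law are hypothetical; the real-time clock tick is
# simulated by the loop counter rather than actual hardware timing.

def read_measurement(tick):
    """Stand-in for a parallel-port sensor read (synthetic rising ramp)."""
    return 20.0 + 0.5 * tick

def control_signal(measurement, setpoint=25.0, gain=2.0):
    """Simple proportional control law: drive the process toward setpoint."""
    return gain * (setpoint - measurement)

def run_loop(ticks):
    """Simulate the clock firing once per second for `ticks` seconds,
    returning the (tick, measurement, control output) log sent to the host."""
    log = []
    for tick in range(ticks):
        m = read_measurement(tick)   # read process measurement
        u = control_signal(m)        # perform the control algorithm
        log.append((tick, m, u))     # report the result to the host
    return log

for tick, m, u in run_loop(5):
    print(f"t={tick}s  measurement={m:.1f}  control={u:+.1f}")
```

On a real single board computer, the `range` loop would instead block on the real-time clock interrupt, and the log entries would go out over one of the serial ports to the host.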
Single board computers have been around for forty years; essentially, they comprise a complete computer on a single circuit board. So, on that board, you will find at the very least the chip or microprocessor, memory and I/O (input/output facilities).
A single board computer might have expansion slots but doesn’t have to (an embedded single board computer will not), and in some instances single board computers were themselves designed to fit into a computer’s backplane to expand a system.
Early home computers were often single board devices and other early uses were for demonstration, educational systems or as embedded controllers. Hobbyists often build their own computers with low-cost 8 or 16 bit processors and static RAM, while at the other extreme it’s possible to find blade servers that deliver demanding memory and processor performance from a single board.
As components have become smaller, the possibilities offered by single board computers have multiplied. One recent development that has opened new pathways for the single board computer is the ready availability of SSDs (solid-state drives) in place of the traditional hard disk drive. An SSD can provide 256 GB of rapid-retrieval data storage in a very small space.
Increasing density of integrated circuits (ICs) made single board computers possible and the benefits were soon apparent. One of the commonest causes of problems in early computing was found in connections between boards; if there is only one board there are no connectors and that source of problems disappears.
The very first single board computer was the “dyna-micro”, developed in 1976 using the Intel C8080A processor and an Intel EPROM (Erasable Programmable Read-Only Memory), the C1702A. The Acorn Electron and the BBC Micro were other early examples of single board computers.
An embedded single board computer has no ability to take plug-in cards so all the I/O is provided on the board. Typically, these are used for machine control or for gaming machines.
Since 1979, GMS has shipped more SBC products than any other supplier.