Sunday, April 3, 2011
SAP TSW and SCADA interface
The SAP IS-Oil Downstream module SAP TSW (Traders' and Schedulers' Workbench) can be integrated with real-time SCADA systems using the SAP PI module. The actual hydrocarbon (natural gas, crude oil) delivered through bulk transportation systems (pipelines) can then be monitored in real time. With the recent developments in real-time interface standards from the OPC Foundation (www.opcfoundation.org), real-time data from the OPC servers of various SCADA systems can be accessed.
Friday, April 1, 2011
Reference: India Infoline News Service
Petroleum Minister launches Asia's first integrated Gas Management System at GAIL
India Infoline News Service / 09:06, Apr 01, 2011

This Gas Management System is the third of its kind in the world, after Petrobras in Brazil and Pemex in Mexico. R P N Singh, Minister of State for Petroleum & Natural Gas and Corporate Affairs, today launched Asia's first integrated Gas Management System, implemented on the SAP platform, at the corporate office of GAIL (India) Limited in New Delhi.

The Gas Management System will establish an integrated, enterprise-wide system covering GAIL's entire pipeline network. All information related to commercial transactions, such as gas volume, calorific value, consumption of gas, value of gas consumed, nominated quantity of gas delivered, transmission tariff, price of gas, other costs and applicable taxes, is available on a real-time basis. The system will help in monitoring operational aspects of the pipeline in real time, including information on network utilization, gas sales, volume transferred, revenue generation through gas sales and price variations.

The system facilitates online invoicing, with invoices downloadable directly from the GMS portal at the customers' end, enabling better customer service and faster realisation. The Gas Management System will enable online receipt of nominations and revisions to nominations, a real-time SCADA interface, daily inventory, and availability of invoices to all customers online as well as through email. Further, the inbuilt system scalability will allow configuration of complex allocation rules and subsequent addition of new pipelines.

GAIL presently operates over 8,000 km of trunk natural gas pipeline network in the country, with a capacity to transport 160 MMSCMD of gas.
The Company is implementing pipeline projects to add another 6,700 km of pipeline at an investment of Rs. 30,000 crore over the next two years, which will enhance the transmission capacity to around 300 MMSCMD and enable GAIL to reach customers in 16 states in the country.
SAP PI Integration with Pipeline Modeling Tool APPS (M/s. ATMOS, UK)
Using SAP PI, the SAP TSW module can be integrated with the pipeline modeling tool (APPS) of M/s. ATMOS, UK. Transactional data can be sent to APPS for dynamic scheduling of the pipeline and to derive the inventory held in the pipelines.
Wednesday, March 30, 2011
SAP PI can access real time data from OPC Servers of SCADA systems
Data integration of SAP ECC - SAP PI - OPC DA Server (SCADA) has been tested. At last we were able to achieve seamless integration of SCADA system data with SAP ECC using the SAP PI module. The following lessons were learnt from the implementation:
1. The OPC Xi SDK from the OPC Foundation (www.opcfoundation.org) was very useful.
2. Reading data from OPC servers at versions OPC DA v2.02 and OPC DA v3.0 was successfully tested.
3. Writing data back to the SCADA system from SAP ECC was a bit of a challenge due to some missing details in the WSDL.
4. Performance: we were able to read 3,000 process tags at a scan rate of less than 3 seconds.
5. Built-in redundancy: the redundancy of the SCADA systems' OPC servers can be utilized, and the logic built around that in ECC works great.
Other future scenarios we need to work on:
1. Utilizing an OPC UA based implementation to build more robust communication with the OPC XML-DA web services.
2. OS support for Windows Server platforms (the results from initial trials on Windows 7 are encouraging).
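The batched tag-read pattern behind the performance figure above (3,000 tags in under 3 seconds) can be sketched as follows. This is a minimal illustration only: the `OpcClientStub` class and its `read` method stand in for whatever the vendor OPC SDK actually provides, and the tag names are invented.

```python
import time

class OpcClientStub:
    """Hypothetical stand-in for an OPC DA/Xi client session; a real
    client would return (value, quality, timestamp) per tag."""
    def __init__(self, values):
        self._values = values

    def read(self, tags):
        return {tag: self._values.get(tag) for tag in tags}

def poll_tags(client, tags, batch_size=500):
    """Read a large tag list in batches, as one might do to keep a
    3,000-tag scan cycle short."""
    results = {}
    for i in range(0, len(tags), batch_size):
        results.update(client.read(tags[i:i + batch_size]))
    return results

if __name__ == "__main__":
    tags = [f"PLANT1.FLOW{n}" for n in range(3000)]
    client = OpcClientStub({t: 42.0 for t in tags})
    start = time.monotonic()
    snapshot = poll_tags(client, tags)
    print(len(snapshot), round(time.monotonic() - start, 3))
```

Batching keeps each round-trip to the server bounded, which is usually what determines whether a large scan fits inside the cycle time.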
Wednesday, June 30, 2010
What is the purpose of data compression algorithm in Process Historian
Author - Roger Palmen
IT Consultant - MES at Logica
The advantages are clear: a data historian can process immense quantities of data. Any good historian can process thousands of data points with sampling rates of up to once per millisecond. When you do the maths, sampling once every millisecond amounts to 31.5 billion samples per year. Using a small storage size of about 2 bytes per sample, that is roughly 58.7 GB per year for a single data point. A small server will have 1,000 points, but I have seen systems running 60,000 to 70,000 points. You would spend your days adding disk arrays. And storing is one thing, but how do you access these huge amounts of data? For reference: the most-used historian (OSIsoft PI Server) requires a 'simple' quad-core, 8 GB memory, 2.2 TB drive-space server to power a 1-million-point server capturing 1 year of data. And that server will cost you much less than $10,000 in hardware and OS.

So that brings us to the disadvantages. As far as I'm concerned, there aren't any. Why? You can throw away most of the data without losing any of the INFORMATION contained in that data. Let's go to the practical examples. Compression algorithms generally compress on the amplitude and frequency of the points that need to be stored.

Let's look at amplitude first. Suppose you have a temperature gauge in your process that measures effectively at one-degree accuracy. That gauge could be connected to a system that indicates the temperature using three decimal digits. If we then throw away all differences smaller than 0.5 degrees, we lose a lot of data but no information, right?

The same applies to frequency. Let's look at the fuel gauge in your car. If you hit the brakes, the fuel will slosh through the tank and make the reading go up and down vividly. But is that really relevant? Looking at the general trend, it should go down only a little every minute (except when refuelling, of course). So there is no need to capture all the details, because you are just interested in the general trend. The theory behind this is the Nyquist–Shannon sampling theorem; take a look at Wikipedia for the details.

In the real world there are a few rules of thumb that you can use to define the compression settings for each point. Using those, you can easily reduce the data volume by 90% or more without losing any information. To summarize:
1) Any substantial system cannot work well or cost-effectively without compression algorithms.
2) When set up right, there are no theoretical drawbacks to using compression algorithms.
3) One exception: if you don't know what is relevant in your data, don't use compression. But then you are looking at research applications where you do not know beforehand what is relevant.
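The amplitude-based filtering described above can be sketched as a simple deadband filter: a sample is stored only when it differs from the last stored value by more than a chosen threshold. This is a minimal illustration of the idea, not any particular historian's algorithm (commercial historians typically use more sophisticated schemes such as swinging-door compression).

```python
def deadband_compress(samples, deadband=0.5):
    """Keep a sample only when it moves more than `deadband` away
    from the last stored value (amplitude compression)."""
    stored = []
    for t, value in samples:
        if not stored or abs(value - stored[-1][1]) > deadband:
            stored.append((t, value))
    return stored

# A noisy temperature signal: three-decimal readings jittering
# around a slow upward ramp.
raw = [(t, 20.0 + t * 0.01 + (0.1 if t % 2 else -0.1)) for t in range(200)]
compressed = deadband_compress(raw, deadband=0.5)
print(len(raw), len(compressed))  # far fewer points survive
```

With a deadband matched to the gauge's real one-degree accuracy, the stored points still reconstruct the trend to within the deadband, which is exactly the "lose data but no information" argument.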
What is the difference between Process Historian and Relational Database
A good article from another RSS feed:
Relational Databases vs. Plant Data Historians—Which One Is Right For You?
July 19, 2007 By: Jack Wilkins
Today, sensors are everywhere. They do everything from counting parts on an assembly line to measuring the quality of products. But some of the biggest challenges occur after measurements have been made. At that point, you have to decide: Where do I collect the data, and how can I use it to improve my operations by decreasing variability and improving quality?
Choices
In the manufacturing arena, real-time operations require fast data collection for optimal analysis. Generally, manufacturing companies approach data collection in one of two ways: with a traditional relational database or with a plant data historian.
Each offers distinct advantages. A relational database is built to manage relationships, but a plant data historian is optimized for time-series data. For example, relational databases are great at answering a question such as: "Which customer ordered the largest shipment?" A plant data historian, however, excels at answering questions such as: "What was today's hourly unit production standard deviation?"
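The historian-style question above ("What was today's hourly unit production standard deviation?") is essentially a time-bucketed aggregation. A minimal sketch, with purely illustrative sample data:

```python
from collections import defaultdict
from statistics import pstdev

def hourly_stdev(samples):
    """Group (hour, value) samples by hour and compute the
    population standard deviation per hour."""
    by_hour = defaultdict(list)
    for hour, value in samples:
        by_hour[hour].append(value)
    return {h: pstdev(v) for h, v in sorted(by_hour.items())}

# Illustrative unit-production counts, keyed by hour of day.
samples = [(8, 100), (8, 104), (8, 96), (9, 100), (9, 100), (9, 100)]
print(hourly_stdev(samples))
```

A historian answers this kind of query quickly because its storage is already ordered by time; a relational database would first have to sort or index millions of rows by timestamp.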
Relational Databases
This type of database is an ideal option for storing contextual or genealogical information about your manufacturing processes. The relational nature of the database provides a flexible architecture and the ability to integrate well with other business systems. When extending the functionality of a relational database for manufacturing applications, companies leverage its openness by creating and managing custom tables to store data that comes from multiple sources, such as other databases, manually entered values via forms, and XML files.
As relational databases mature, I see vendors improving their system's performance in transactional manufacturing applications, such as capturing data from an RFID reader. When capturing contextual information or time-series data from a small number of sensors, a relational database may work best.
Plant Data Historians
On the other hand, plant data historians are a perfect choice when you must capture data from sensors and other real-time systems because this type of repository uses manufacturing standards, such as OPC, that facilitate communications. With plant data historians, you can streamline implementation by using standard interfaces.
With most of these systems, there is little or no management or creation of data schema, triggers, stored procedures, or views. You can usually install and configure a plant data historian quickly without specialized services, such as custom coding or scripting for the installation.
Plant data historians are also designed to survive the harshness of the production floor and feature the ability to continue capturing and storing data even if the main data store is unavailable. Another feature typically found in a plant data historian is the ability to compress data, reducing the amount of drive space required. When capturing time-series data rapidly (with a re-read rate of less than 5 s) for several thousand sensors, a plant data historian may work best.
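The "keep capturing even if the main data store is unavailable" behaviour described above is usually called store-and-forward: samples accumulate in a local buffer during an outage and are flushed in order once the store returns. A minimal sketch, with illustrative names that don't correspond to any vendor's API:

```python
class StoreAndForwardBuffer:
    """Buffer samples locally while the main historian store is
    unreachable, and flush them in order once it comes back."""
    def __init__(self, store):
        self.store = store      # callable that may raise ConnectionError
        self.backlog = []

    def capture(self, sample):
        self.backlog.append(sample)
        self.flush()

    def flush(self):
        while self.backlog:
            try:
                self.store(self.backlog[0])
            except ConnectionError:
                return          # store is down; keep the backlog
            self.backlog.pop(0)

# Simulate an outage: the store fails for the first two write attempts.
received, failures = [], [ConnectionError(), ConnectionError()]
def flaky_store(sample):
    if failures:
        raise failures.pop(0)
    received.append(sample)

buf = StoreAndForwardBuffer(flaky_store)
for s in [1, 2, 3]:
    buf.capture(s)
print(received, buf.backlog)  # all samples delivered, in order
```

The key property is that no sample is lost and ordering is preserved, which is what lets the historian's record stay gap-free across network outages on the plant floor.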
The Best of Both Worlds
When relational databases and plant data historians are deployed in concert, companies can collect and analyze the tremendous volumes of information generated in their plants, improve performance, integrate the plant floor with business systems, and reduce the cost of meeting industry regulations. As stated by many Six Sigma quality experts, "You can't improve what you don't measure." Plantwide data collection can make this possible.
By using analysis tools, such as Microsoft Excel or other off-the-shelf reporting solutions, you can increase the quality and consistency of your products by comparing past production runs, analyzing the data prior to a downtime event, and plotting ideal production runs against in-process runs. Today's analysis tools make it easy to aggregate data, prepare reports, and share information using standard Web browsers.
Plantwide in-process data collection also serves as the vital link between plant processes and business operations, providing business systems with the data they need to gain a clear, accurate picture of current production status or historical trends.
Ultimately, the best decision is to use both a relational database and a plant data historian. The combined power of the two provides the detailed information that yields numerous benefits, internally for the company and externally for customers.