
Multiple .DAT output files


mwassmer Jul 5, 2012 10:02 PM

Can you recommend a good method for outputting a station's collected data to multiple output .DAT files at the end of each collection interval?

To be more precise, I'd like to do the following at the end of each collection interval:

1) Send the new data to a .DAT file using the "Create Unique File Name" option. My database would insert this file's data and then delete the .DAT file. In other words, this would be a temporary file that would only exist for the purpose of doing incremental database loads.

2) Append the new data to an archival .DAT file using the "Append to End of File" option. I would keep this file as a backup in case I lose the data in the database.
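To illustrate, the incremental-load step (1) might look something like this, assuming TOA5-style .DAT files with four header lines; the table name and column layout are made up, and SQLite stands in for whatever database is actually used:

```python
import csv
import os
import sqlite3

HEADER_LINES = 4  # TOA5: environment, field names, units, processing

def load_and_delete(dat_path, conn):
    """Insert the data rows of a unique-named .DAT file into the
    database, then remove the temporary file."""
    with open(dat_path, newline="") as f:
        rows = list(csv.reader(f))[HEADER_LINES:]
    # Schema is illustrative only (ts, record number, one value column).
    conn.executemany(
        "INSERT INTO measurements (ts, record, value) VALUES (?, ?, ?)",
        [(r[0], r[1], r[2]) for r in rows],
    )
    conn.commit()
    os.remove(dat_path)  # only delete after a successful commit
```

The delete happens last, so a failed insert leaves the unique file in place for a retry.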

BTW, I do have LNDB, but it isn't quite meeting my needs. Eventually, I'd like to directly collect the data from the cache instead of using .DAT files, but I can't afford the LoggerNet SDK right now.

Thank you for your help.


Dana Jul 5, 2012 11:35 PM

I think there are a couple of ways to handle this:

* Store data using the Create Unique File Name option in LN.
Set up a batch file in Task Master to run after data collection from the datalogger, which appends the collected file to an existing archive file. The drawback of this approach is that you'll have to deal with headers in the archive file (either strip them off, or don't store headers to begin with, though it's always a good idea to have headers with your data).

* Store data using the default option of Append to End of File. Then use Task Master to run a Split report after data collection. The Split report could use the Last Count option, found under the [Offset/Options] button on the Input Files tab. With this option enabled, Split will begin processing data where it last left off. Then set the Output File's "If File Exists Then" option to Create New (or Overwrite). This essentially gives you a file containing only the new data since the last collection. Again, you would need to consider header information: Split has a grid on the Output File tab where you can create column headers, or you could use a batch file to prepend a file containing nothing but a header to the newly created Split file.
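A minimal sketch of the append step in the first approach, keeping the header only from the first file (the four-line TOA5 header count and the file names are assumptions):

```python
import os

HEADER_LINES = 4  # TOA5: environment, field names, units, processing

def append_to_archive(new_file, archive_file):
    """Append a newly collected .DAT file to an archive file,
    stripping the header from all but the first file."""
    with open(new_file) as f:
        lines = f.readlines()
    first_write = not os.path.exists(archive_file)
    with open(archive_file, "a") as out:
        # Keep the header only when creating the archive.
        out.writelines(lines if first_write else lines[HEADER_LINES:])
```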
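For the second approach, the batch step that glues a saved header file onto Split's header-less output could be as simple as this (file names are placeholders):

```python
import shutil

def add_header(header_file, split_output, combined_file):
    """Concatenate a stored header file and the new Split output
    into a complete file ready for the database load."""
    with open(combined_file, "w") as out:
        for part in (header_file, split_output):
            with open(part) as src:
                shutil.copyfileobj(src, out)
```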

If you have the opportunity, I would like to hear further about why LNDB does not meet your needs. I am always interested in feedback so we can improve the product for future versions.

Thanks, Dana


mwassmer Jul 6, 2012 05:17 PM

Thank you very much for these suggestions, Dana. I'll give them a try.
Regarding LNDB, here are a few constructive criticisms:

1) The application behaves in unpredictable ways, which can compromise data quality. This wouldn't be an issue if the documentation were more thorough. For example, the docs should explain what happens when a user manually changes the meta tables. To solve these types of problems, either a) the documentation needs to include a lot more "if you do this, then here's what will happen" scenarios; or b) the source code should be made available so that power users can answer their own questions before encountering surprises.

2) The documentation should be much more thorough. For example, there is no description of the purpose of many of the meta columns and what the data means.

3) The application creates large quantities of "wide" tables, which make for difficult, poorly performing queries. It is very difficult to query this type of data unless it is loaded into a relational data warehouse. The application should include a data warehousing option, or at least the documentation should explain how to load the OLTP data into a relational OLAP database.

4) All the data is loaded into the dbo schema by default. This should be customizable.

5) The data types are not customizable.

6) The application's options are generally not flexible enough. This would be fine if a library of .NET/Java classes were provided with LNDB so that power users could choose to use either the canned GUI options or program against the data cache themselves.

My fingers are crossed that CSI will make available a more modern, less expensive API for accessing the data cache. In my opinion, it would make sense to package this with LNDB or make it available as a "premium" LNDB option.

* Last updated by: mwassmer on 7/6/2012 @ 11:19 AM *


jra Jul 6, 2012 10:15 PM

If you have an inexpensive, robust communications link, you can add a duplicate "station" to your LoggerNet setup, something like this:
COM
---PBPort
-----CR1000_unique <<< set for unique data file names

COM
---PBPort
-----CR1000_append <<< set for appending to end of data file

You'll collect the data file(s) twice, hence the recommendation on an inexpensive, robust link.

Janet


mwassmer Jul 6, 2012 10:40 PM

Thank you very much for the suggestion. I was trying to avoid that approach, but it may end up being the best option. I do have an inexpensive, robust IP link.


Dana Jul 9, 2012 06:20 PM

1) the docs should explain what happens when a user manually changes the meta tables. To solve these types of problems, either a) the documentation needs to include a lot more "if you do this, then here's what will happen" scenarios; or b) the source code should be made available so that power users can answer their own questions before encountering surprises.

I do agree that there are probably a lot of things we could add to the documentation. It's hard to anticipate all the things that a user might try.

LNDB relies on the structure of the meta data tables. It's not intended that it would be changed by the user.

For the usual reasons, we do not make available source code for any of our products.

2) The documentation should be much more thorough. For example, there is no description of the purpose of many of the meta columns and what the data means.

We do have an internal document that outlines the meta data tables. If you would like to work with your normal contact here at CSI, we can get a copy of that to you. It is also something we can consider getting into the manual for a future release.

3) The application creates large quantities of "wide" tables, which make for difficult, poorly performing queries. It is very difficult to query this type of data unless it is loaded into a relational data warehouse. The application should include a data warehousing option, or at least the documentation should explain how to load the OLTP data into a relational OLAP database.

The structure of the database tables created by LNDB is based on the tables coming from the datalogger. That's the best way to control what the tables ultimately look like in the database.

You may want to consider building triggers to populate a different database table, based on the tables being populated by LNDB.
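One way to act on this trigger suggestion: copy each row inserted into a wide table over to a tall (long-format) table that is easier to query. The sketch below uses SQLite for brevity and made-up table and column names; a real LNDB store would more likely be SQL Server or MySQL, which support equivalent triggers.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
-- Wide table as LNDB might create it (names are hypothetical).
CREATE TABLE station_wide (ts TEXT, airtemp REAL, rh REAL);
-- Tall companion table: one row per (timestamp, sensor) pair.
CREATE TABLE station_tall (ts TEXT, sensor TEXT, value REAL);

-- Fan each wide row out into tall rows as it arrives.
CREATE TRIGGER widen_to_tall AFTER INSERT ON station_wide
BEGIN
    INSERT INTO station_tall VALUES (NEW.ts, 'airtemp', NEW.airtemp);
    INSERT INTO station_tall VALUES (NEW.ts, 'rh', NEW.rh);
END;
""")
conn.execute("INSERT INTO station_wide VALUES ('2012-07-05 10:00:00', 21.3, 55.0)")
```

The tall table can then be queried or loaded into a warehouse without touching the tables LNDB manages.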

4) All the data is loaded into the dbo schema by default. This should be customizable.

If what you are trying to accomplish is picking and choosing the columns that comprise a table, this is something we can consider for a future version. As noted above, the database tables created are a reflection of the data tables in the datalogger, so you have some control there.

5) The data types are not customizable.

The data types in the database are based on the data types in the datalogger tables. They are formatted to accommodate the full range of values that could be returned by each data type. We could look at relaxing some of the rules in this regard (for instance, allowing the data types to be changed after the tables are created, within some set of boundaries).

6) The application's options are generally not flexible enough. This would be fine if a library of .NET/Java classes were provided with LNDB so that power users could choose to use either the canned GUI options or program against the data cache themselves.

My fingers are crossed that CSI will make available a more modern, less expensive API for accessing the data cache. In my opinion, it would make sense to package this with LNDB or make it available as a "premium" LNDB option.

LNDB was designed to be a simple tool to facilitate getting data from the LoggerNet data cache into an SQL-based database. As you note, we do have SDKs available for those who need more advanced options (admittedly, our SDKs are not .NET-based).

One thing to consider is that the LoggerNet data cache is not permanent storage and its size is limited (it's set up as ring memory), so the end result of anything you do there should be to get that data into more secure long-term storage: data files or a database.

Another thing to consider is that for any of our newer PakBus dataloggers that are accessible via IP, there is a web API (documented in the CRBasic help) that allows you to access data directly from the tables in the datalogger. This API is included in the datalogger operating system.
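As a sketch of using that web API, the query URL for a table could be built like this. The command and parameter names follow the DataQuery interface as described in the CRBasic help, but verify them against your datalogger's OS version; the host address and table name are placeholders.

```python
from urllib.parse import urlencode

def data_query_url(host, table, fmt="json", mode="most-recent", records=1):
    """Build a DataQuery URL for a datalogger's built-in web server."""
    params = {
        "command": "DataQuery",
        "uri": f"dl:{table}",  # dl: addresses a table on the logger
        "format": fmt,         # e.g. json, html, toa5, xml
        "mode": mode,          # e.g. most-recent, since-time
        "p1": records,         # meaning depends on the chosen mode
    }
    return f"http://{host}/?{urlencode(params)}"
```

The resulting URL can be fetched with urllib.request.urlopen() or any HTTP client.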

Thanks for the feedback; it helps us make our products better for all users. If you have specific questions about your use of LNDB that we can help with, contact the Applications Engineer who you typically work with and we'll try to provide suggestions.

Dana


mwassmer Jul 12, 2012 09:22 PM

Many thanks for the thoughtful responses, Dana! Your suggestions and insights are very helpful; I will proceed accordingly in several areas.
