Class DatasourceQueryExecutor

  • All Implemented Interfaces:
    HistoryQueryExecutor

    public class DatasourceQueryExecutor
    extends AbstractHistoryLoader<java.lang.Integer>
    Core query functions. This system works by doing the following:
    1) Get tag meta: map paths to IDs, get SCIDs, etc., between the range.
    2) Get the scan class exec records, so we can tell when things were running.
    3) Get the values between the range.
    4) Pass through the values, checking SCEs, setting the "current" value in the column, and returning the latest value.
    Back up in the HistoryWriter...
    5) At the end of every "block", commit the current values.
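
    The steps above amount to a single pass over the raw rows. A minimal, self-contained sketch of that flow, using hypothetical stand-in types rather than the real SDK classes:

    import java.util.*;

    // Hypothetical illustration of the five steps above; none of these types are the real SDK API.
    class QueryFlowSketch {
        record TagMeta(int tagId, String path) {}
        record ScanClassExec(long start, long end) {}      // window when a scan class was running
        record RawValue(int tagId, long timestamp, Object value) {}

        void execute(long rangeStart, long rangeEnd) {
            // 1) Tag metadata: map paths to ids (and scan class ids) for the requested range.
            List<TagMeta> meta = loadTagMeta(rangeStart, rangeEnd);

            // 2) Scan class execution records, so we can tell when values could have been recorded.
            List<ScanClassExec> sceRecords = loadScanClassExecs(rangeStart, rangeEnd);

            // 3) Raw values between the range boundaries.
            Iterator<RawValue> values = loadValues(rangeStart, rangeEnd);

            // 4) Pass through the values, checking SCE coverage and tracking the "current"
            //    value per tag; (5) the writer then commits the current values at each block end.
            Map<Integer, RawValue> current = new HashMap<>();
            while (values.hasNext()) {
                RawValue v = values.next();
                if (coveredByScanClass(sceRecords, v.timestamp())) {
                    current.put(v.tagId(), v);
                }
            }
        }

        // Stubs standing in for the actual database reads.
        List<TagMeta> loadTagMeta(long start, long end) { return List.of(); }
        List<ScanClassExec> loadScanClassExecs(long start, long end) { return List.of(); }
        Iterator<RawValue> loadValues(long start, long end) { return Collections.emptyIterator(); }
        boolean coveredByScanClass(List<ScanClassExec> records, long ts) {
            return records.stream().anyMatch(r -> ts >= r.start() && ts <= r.end());
        }
    }
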
    • Constructor Detail

      • DatasourceQueryExecutor

        public DatasourceQueryExecutor​(@Nonnull
                                       GatewayContext context,
                                       @Nonnull
                                       QueryController controller,
                                       @Nonnull
                                       java.util.List<ColumnQueryDefinition> colDefs,
                                       java.lang.String datasource,
                                       @Nullable
                                       java.lang.String gatewayName,
                                       java.lang.String providerName)
    • Method Detail

      • getQueryCache

        protected QueryCache getQueryCache()
      • getReadQuery

        protected java.lang.String getReadQuery​(int tagids,
                                                java.lang.String partitionName,
                                                boolean includeVType)
        Returns a read query for the specified number of tag id parameters.
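
        The query text itself is not documented here, but a statement keyed to "the specified number of tag id parameters" typically expands into an IN clause with that many placeholders. A hedged sketch under assumed table and column names:

        // Hedged sketch: the partition column names are assumptions, not the actual schema.
        class ReadQuerySketch {
            static String buildReadQuery(int tagIdCount, String partitionName, boolean includeVType) {
                String placeholders = String.join(",", java.util.Collections.nCopies(tagIdCount, "?"));
                String valueColumns = "intvalue, floatvalue, stringvalue, datevalue";
                if (includeVType) {
                    valueColumns += ", vtype";   // assumed name for a value-type column
                }
                return "SELECT tagid, " + valueColumns + ", dataintegrity, t_stamp"
                     + " FROM " + partitionName
                     + " WHERE tagid IN (" + placeholders + ")"
                     + " AND t_stamp BETWEEN ? AND ?"
                     + " ORDER BY t_stamp ASC";
            }
        }
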
      • getColumnNodes

        public java.util.List<? extends HistoryNode> getColumnNodes()
        Description copied from interface: HistoryQueryExecutor
        Returns the HistoryNodes of this executor. There MUST be one for every tag path, and they must be in the same order as the paths provided to the executor when it was created. Also, these values must be available as soon as the executor is created. However, they won't be consulted for their data type until after initialize is called, so the normal procedure is to create and return DelegatingHistoryNodes, which get filled in later.
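
        The delegating-node pattern described above can be sketched as follows; these are simplified stand-in types, not the SDK's DelegatingHistoryNode:

        import java.util.*;

        // Simplified stand-ins for the delegating-node pattern; not the SDK's actual classes.
        interface Node { String getDataType(); }

        class DelegatingNode implements Node {
            private Node delegate;                                   // filled in after initialize()
            void setDelegate(Node actual) { this.delegate = actual; }
            @Override public String getDataType() {
                return delegate != null ? delegate.getDataType() : "Unknown";
            }
        }

        class ExecutorSketch {
            private final List<DelegatingNode> nodes = new ArrayList<>();

            ExecutorSketch(List<String> paths) {
                // One node per path, in path order, available as soon as the executor exists.
                paths.forEach(p -> nodes.add(new DelegatingNode()));
            }

            List<? extends Node> getColumnNodes() { return nodes; }

            void initialize() {
                // Only now are the delegates resolved, e.g. from tag metadata queried at this point.
                nodes.forEach(n -> n.setDelegate(() -> "Float8"));
            }
        }
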
      • getPathPositions

        protected java.util.Collection<java.lang.Integer> getPathPositions​(java.lang.String path)
        Returns the positions of columns for the given path.
      • getColumns

        protected java.util.Collection<HistoryNode> getColumns​(java.lang.String path)
      • buildColumns

        protected void buildColumns()
      • getNodeAt

        protected HistoryNode getNodeAt​(int pos)
      • getColumnAt

        protected HistoryColumn getColumnAt​(int pos)
        Returns the HistoryColumn, or null if the node isn't a column, which shouldn't happen after 7.8.2. Prior to that, ErrorColumn was erroneously not an actual HistoryColumn.
      • getEffectiveWindowSizeMS

        public int getEffectiveWindowSizeMS()
        Description copied from interface: HistoryQueryExecutor
        When "natural" results are requested, this will be called to let the query executors say what they think "natural" means. If a query executor does not support natural results, it should return -1. If all of the query executors return -1, a raw query will be performed.
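
        A caller that combines several executors might apply this contract as below; the "smallest supported window wins" policy is an illustrative assumption, not documented behavior:

        // Illustrative only: shows the -1 / raw-query fallback contract, not the engine's real logic.
        class NaturalWindowSketch {
            static int resolveNaturalWindowMs(java.util.List<java.util.function.IntSupplier> executors) {
                int best = -1;
                for (java.util.function.IntSupplier executor : executors) {
                    int window = executor.getAsInt();   // stands in for getEffectiveWindowSizeMS()
                    if (window > 0 && (best == -1 || window < best)) {
                        best = window;                  // assumed policy: smallest supported window wins
                    }
                }
                return best;                            // -1 everywhere => perform a raw query instead
            }
        }
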
      • getSeedIdMap

        protected com.google.common.collect.Multimap<java.lang.String,​java.lang.Integer> getSeedIdMap​(boolean post)
      • getDataTypeForTag

        protected DataTypeClass getDataTypeForTag​(java.lang.Integer tagId)
      • getDataTypeForTagColumn

        protected DataTypeClass getDataTypeForTagColumn​(java.lang.Integer tagId)
      • maybeSortTagInfo

        protected void maybeSortTagInfo()
      • getIdsForTime

        protected java.util.Set<java.lang.Integer> getIdsForTime​(long start,
                                                                 long end,
                                                                 boolean compressedOnly)
        Returns the tag ids that were in use during the span of time (inclusive). If end==0, it will return the ids for the seed values.
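
        A hedged usage sketch of the two documented modes; the caller code is hypothetical and, since the method is protected, would live in a subclass:

        // Hypothetical subclass code illustrating the two documented modes.
        void gatherIds(long rangeStart, long rangeEnd) {
            // Ids of tags that were in use at any point in [rangeStart, rangeEnd] (inclusive).
            java.util.Set<Integer> activeIds = getIdsForTime(rangeStart, rangeEnd, false);

            // With end == 0, the result is instead the ids needed for the seed values.
            java.util.Set<Integer> seedIds = getIdsForTime(rangeStart, 0L, false);

            // compressedOnly = false is an arbitrary choice made only for this illustration.
        }
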
      • getIdsForPostQuery

        protected java.util.Set<java.lang.Integer> getIdsForPostQuery​(long end)
      • getPotentiallyStaleDatapoint

        protected HistoricalValue getPotentiallyStaleDatapoint​(long valueTS)
        Returns a stale datapoint for the appropriate time if the system was down at some point between the last value and now.
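
        Given that the executor also loads scan class execution records (step 2 in the class description), one plausible reading is "find a gap in coverage between the last value and now". A hedged sketch of that idea with hypothetical names:

        // Hypothetical illustration of the "stale datapoint" gap check; not the actual implementation.
        class StaleCheckSketch {
            record Sce(long start, long end) {}   // one scan class execution window

            /** Returns a timestamp where coverage was missing, or null if coverage was continuous. */
            static Long findStaleTimestamp(long lastValueTs, long now, java.util.List<Sce> executions) {
                long cursor = lastValueTs;                 // executions assumed sorted by start time
                for (Sce sce : executions) {
                    if (sce.start() > cursor) {
                        return cursor;                     // gap: the system was not running here
                    }
                    cursor = Math.max(cursor, sce.end());
                }
                return cursor < now ? cursor : null;
            }
        }
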
      • initialize

        public void initialize()
                        throws java.lang.Exception
        The primary task of initialize is to load the information about the tags and update the columns, so that we'll be ready to query.
        Specified by:
        initialize in interface HistoryQueryExecutor
        Overrides:
        initialize in class AbstractHistoryLoader<java.lang.Integer>
        Throws:
        java.lang.Exception
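
        A subclass override would typically preserve that contract; a minimal hedged sketch (only the super call and the constructor signature come from this documentation):

        // Hypothetical subclass; the documentation only guarantees what super.initialize() does.
        public class CustomDatasourceQueryExecutor extends DatasourceQueryExecutor {
            public CustomDatasourceQueryExecutor(GatewayContext context, QueryController controller,
                                                 java.util.List<ColumnQueryDefinition> colDefs,
                                                 String datasource, String gatewayName, String providerName) {
                super(context, controller, colDefs, datasource, gatewayName, providerName);
            }

            @Override
            public void initialize() throws Exception {
                super.initialize();   // loads tag information and updates the columns (documented contract)
                // Any additional per-query setup for the subclass would go here.
            }
        }
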
      • getValueFromRS

        protected RawTagValue getValueFromRS​(java.sql.ResultSet rs)
                                      throws java.lang.Exception
        Throws:
        java.lang.Exception
      • getValueFromDS

        protected RawTagValue getValueFromDS​(Dataset ds,
                                             int rowId)
                                      throws java.lang.Exception
        Throws:
        java.lang.Exception
      • readSeedValues

        protected java.util.SortedSet<RawTagValue> readSeedValues()
                                                           throws java.lang.Exception
        This function reads the "seed" values: the last value for each tag before the start time.
        Throws:
        java.lang.Exception
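
        One common way to express "the last value for each tag before the start time" in SQL is a per-tag MAX(t_stamp) join; a hedged sketch under an assumed partition schema, not necessarily the query this class actually runs:

        // Assumed partition schema (sqlt_data-style table); the real query may differ.
        class SeedQuerySketch {
            static String seedValueQuery(String partitionTable, int tagCount) {
                String placeholders = String.join(",", java.util.Collections.nCopies(tagCount, "?"));
                return "SELECT d.tagid, d.intvalue, d.floatvalue, d.stringvalue, d.datevalue, d.t_stamp"
                     + " FROM " + partitionTable + " d"
                     + " JOIN (SELECT tagid, MAX(t_stamp) AS t_stamp"
                     + "       FROM " + partitionTable
                     + "       WHERE t_stamp < ? AND tagid IN (" + placeholders + ")"
                     + "       GROUP BY tagid) latest"
                     + "   ON d.tagid = latest.tagid AND d.t_stamp = latest.t_stamp";
            }
        }
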
      • readCompletionValues

        protected java.util.SortedSet<RawTagValue> readCompletionValues()
                                                                 throws java.lang.Exception
        Throws:
        java.lang.Exception
      • runSpecialValueQuery

        protected java.util.SortedSet<RawTagValue> runSpecialValueQuery​(java.lang.String query,
                                                                        java.lang.Long timeParam,
                                                                        java.util.List<Partition> partitions,
                                                                        com.google.common.collect.Multimap<java.lang.String,​java.lang.Integer> toReadSet)
                                                                 throws java.lang.Exception
        Throws:
        java.lang.Exception
      • primeRead

        protected void primeRead()
                          throws java.lang.Exception
        Description copied from class: AbstractHistoryLoader
        This function starts the read process and gets the system ready for the first call to readNextFromSource.
        Specified by:
        primeRead in class AbstractHistoryLoader<java.lang.Integer>
        Throws:
        java.lang.Exception
      • advancePartition

        protected boolean advancePartition()
                                    throws java.lang.Exception
        Queries data and sets up valueRS. Returns FALSE if no data is available, and TRUE if data is found. Important: if TRUE is returned, valueRS.next() will have been called, so the first value is ready to be read.
        Throws:
        java.lang.Exception
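
        The note that valueRS.next() has already been called matters to the read loop. A hedged sketch of a caller that respects it; the loop and the process(...) helper are hypothetical, not the actual AbstractHistoryLoader code:

        // Hypothetical read loop inside a subclass; valueRS and getValueFromRS come from the class
        // above, while process(...) is a made-up stand-in for whatever consumes each value.
        protected void readAllValues() throws Exception {
            while (advancePartition()) {
                // TRUE means valueRS.next() was already called: the cursor is on a row,
                // so read the current row first, then advance normally.
                process(getValueFromRS(valueRS));
                while (valueRS.next()) {
                    process(getValueFromRS(valueRS));
                }
            }
        }
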
      • usingPreProcessed

        protected boolean usingPreProcessed()
      • readPreProcessedSet

        protected IdentifiedHistoricalValue<java.lang.Integer> readPreProcessedSet()
                                                                            throws java.lang.Exception
        This function reads a set of processed values. Processed values consist of "min, max, avg, entry, exit" (which might be combined), or potentially just a single "direct". Average and Direct are both identified by flags=0.

        All values are now stored with the same timestamp, as their order can be determined by the flags. However, for 7.4 & 7.5 this wasn't true; in those versions, min and max had different times. So, we have to do a bit of work to make sure we only read our block: we read until the time progresses by too much (1/2 block size).

        Throws:
        java.lang.Exception
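
        The 7.4/7.5 compatibility note about reading "until the time progresses by too much" can be pictured like this; a simplified, hypothetical sketch rather than the real implementation:

        // Simplified, hypothetical sketch of the half-block-size boundary check described above.
        class PreProcessedBlockSketch {
            record ProcessedRow(int tagId, long timestamp, int flags, Object value) {}

            static java.util.List<ProcessedRow> readBlock(java.util.Iterator<ProcessedRow> rows, long blockSizeMs) {
                java.util.List<ProcessedRow> block = new java.util.ArrayList<>();
                Long blockStart = null;
                while (rows.hasNext()) {
                    ProcessedRow row = rows.next();
                    if (blockStart == null) {
                        blockStart = row.timestamp();
                    } else if (row.timestamp() - blockStart > blockSizeMs / 2) {
                        // Time moved by more than half a block: the next block has started.
                        // (A real reader would keep this look-ahead row for the next call.)
                        break;
                    }
                    block.add(row);
                }
                return block;
            }
        }
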
      • readSingle

        protected IdentifiedHistoricalValue<java.lang.Integer> readSingle()
                                                                   throws java.lang.Exception
        Throws:
        java.lang.Exception