An Easy Way to Export / Import Scheduled Reports from Skedler

Here are the highlights of what’s new and improved in Skedler Reports 4.20.0. For detailed information about this release, check the release notes.

In Skedler Reports 4.20.0, you can now export and import scheduled reports from one user to another with ease.

Exporting and importing reports from the anonymous user to the skedlerAdmin user

To export and import reports between the anonymous user and an admin user in your existing Skedler installation, follow the steps below. Before you begin, back up your Skedler index or internal DB for safety.

To learn how to back up the Skedler index or internal DB, click here.

1. In the anonymous user's dashboard, select all the scheduled reports by clicking the Select All checkbox.

2. Pause all the scheduled reports that you are going to import into the admin user login.

3. Click the Export button; a _reports.json file will be generated.

4. Open the reporting.yml file and navigate to the SKEDLER SECURITY SETTINGS section. Uncomment skedler_anonymous_access and set its value to “false”.

5. Restart Skedler, enter the skedlerAdmin credentials, go to the scheduled reports page, and click Import.

6. Select the _reports.json file. All the reports that were shown for the anonymous user will now also appear for the admin user.

7. Resume the scheduled reports and check whether the reports are generated.

Note: If you wish to re-import the _reports.json file, first select all the reports and delete them. Also delete the burst filters and templates if they were added from the JSON file.
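Step 4 above can be sketched as the following reporting.yml fragment. Only the skedler_anonymous_access key is named in the steps; the comment and surrounding layout of the file are assumptions:

```yaml
# reporting.yml, in the SKEDLER SECURITY SETTINGS section.
# Uncomment this key and set it to "false" to disable anonymous
# access before importing as skedlerAdmin (per step 4 above).
skedler_anonymous_access: false
```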

Skedler Reports v4.19.0 & Alerts v4.9.0 now support ELK 7.10

Here are the highlights of what’s new and improved in Skedler Reports 4.19.0 & Alerts 4.9.0. For detailed information about this release, check the release notes.

Indexing speed improvement

Elasticsearch 7.10 improves indexing speed by up to 20%. We’ve reduced the coordination needed to add entries to the transaction log. This reduction allows for more concurrency and increases the transaction log buffer size from 8KB to 1MB. However, performance gains are lower for full-text search and other analysis-intensive use cases: the heavier the indexing chain, the lower the gains, so indexing chains that involve many fields, ingest pipelines, or full-text indexing will see smaller improvements. These gains can now be utilized in Skedler v4.19.0.

More space-efficient indices

Elasticsearch 7.10 depends on Apache Lucene 8.7, which introduces higher compression of stored fields, the part of the index that notably stores the _source. On the various data sets that we benchmark against, we noticed space reductions between 0% and 10%. This change especially helps on data sets that have lots of redundant data across documents, which is typically the case of the documents that are produced by our Observability solutions, which repeat metadata about the host that produced the data on every document.

Elasticsearch offers the ability to configure the index.codec setting to tell Elasticsearch how aggressively to compress stored fields. Both supported values, default and best_compression, will get better compression with this change.
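As a sketch, the codec can be chosen when an index is created. The index name my-index is illustrative; note that index.codec is a static setting, so it is applied at index creation (or on a closed index):

```
PUT /my-index
{
  "settings": {
    "index": { "codec": "best_compression" }
  }
}
```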

Data tiers

7.10 introduces the concept of formalized data tiers within Elasticsearch. Data tiers are a simple, integrated approach that gives users control over optimizing for cost, performance, and breadth/depth of data. Prior to this formalization, many users configured their own tier topology using custom node attributes as well as using ILM to manage the lifecycle and location of data within a cluster.

With this formalization, data tiers (content, hot, warm, and cold) can be explicitly configured using node roles, and indices can be configured to be allocated within a specific tier using index-level data tier allocation filtering. ILM will make use of these tiers to automatically migrate data between nodes as an index goes through the phases of its lifecycle.

Newly created indices abstracted by a data stream will be allocated to the data_hot tier automatically, while standalone indices will be allocated to the data_content tier automatically. Nodes with the pre-existing data role are considered to be part of all tiers.
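As a sketch of the two mechanisms above, a tier is assigned via node roles in elasticsearch.yml, and an index can state its tier preference via an index-level allocation setting. The index name and the tier choices below are illustrative:

```yaml
# elasticsearch.yml on a node dedicated to the warm tier
node.roles: [ data_warm ]
```

```
PUT /my-index/_settings
{
  "index.routing.allocation.include._tier_preference": "data_warm,data_hot"
}
```

With a tier preference like this, the index is allocated to warm-tier nodes when available, falling back to the hot tier otherwise.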

AUC ROC evaluation metrics for classification analysis

The area under the curve of the receiver operating characteristic (AUC ROC) is an evaluation metric that has been available for outlier detection since 7.3 and now is available for classification analysis. AUC ROC represents the performance of the classification process at different predicted probability thresholds. The true positive rate for a specific class is compared against the rate of all the other classes combined at the different threshold levels to create the curve.
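A hedged sketch of requesting AUC ROC from the data frame analytics evaluation API follows. The index and field names are illustrative, and the exact request shape should be verified against the Elasticsearch 7.10 documentation:

```
POST _ml/data_frame/_evaluate
{
  "index": "classification-dest-index",
  "evaluation": {
    "classification": {
      "actual_field": "animal_class",
      "metrics": {
        "auc_roc": { "class_name": "dog" }
      }
    }
  }
}
```

Here the curve compares the "dog" class against all other classes combined, as described above.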

Custom feature processors in data frame analytics

Feature processors enable you to extract and process features from document fields. You can use these features in model training and model deployment. Custom feature processors provide a mechanism to create features that can be used at search and ingest time, and they don’t take up space in the index. This more tightly couples feature generation with the resulting model. The result is simplified model management, as both the features and the model can easily follow the same life cycle.
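As an illustrative fragment, a custom feature processor is declared inside the analytics job configuration. The one_hot_encoding processor and the field and feature names below are assumptions based on the 7.10 ML API and should be checked against the docs:

```
"analysis": {
  "classification": {
    "dependent_variable": "label",
    "feature_processors": [
      {
        "one_hot_encoding": {
          "field": "animal_class",
          "hot_map": { "cat": "is_cat", "dog": "is_dog" }
        }
      }
    ]
  }
}
```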

Points in time (PITs) for search

In 7.10, Elasticsearch introduces points in time (PITs), a lightweight way to preserve index state over searches. PITs improve the end-user experience by making UIs more reactive, and they are supported by Skedler v4.19.0.

By default, a search request waits for complete results before returning a response. For example, a search that retrieves top hits and aggregations returns a response only after both top hits and aggregations are computed. However, aggregations are usually slower and more expensive to compute than top hits. Instead of sending a combined request, you can send two separate requests: one for top hits and another one for aggregations. With separate search requests, a UI can display top hits as soon as they’re available and display aggregation data after the slower aggregation request completes. You can use a PIT to ensure both search requests run on the same data and index state.
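The two-request pattern above can be sketched with a PIT (the index name is illustrative): first open a point in time, then pass its id to both search requests so they run against the same index state. Note that a search using a PIT targets /_search without an index in the path:

```
POST /my-index/_pit?keep_alive=1m

GET /_search
{
  "query": { "match_all": {} },
  "pit": { "id": "<id returned by the _pit call>", "keep_alive": "1m" }
}
```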

New thread pools for system indices

We’ve added two new thread pools for system indices: system_read and system_write. These thread pools ensure system indices critical to the Elastic Stack, such as those used by security or Kibana, remain responsive when a cluster is under heavy query or indexing load.

system_read is a fixed thread pool used to manage resources for reading operations targeting system indices. Similarly, system_write is a fixed thread pool used to manage resources for write operations targeting system indices. Both have a maximum number of threads equal to 5 or half of the available processors, whichever is smaller.
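The sizing rule for both pools can be sketched as follows. This is a minimal sketch of the rule as stated above; the integer rounding and the floor of one thread are our assumptions, not stated in the text:

```python
def system_pool_size(available_processors: int) -> int:
    # Maximum threads = the smaller of 5 and half the available processors.
    # Rounding down and a floor of 1 thread are assumptions for this sketch.
    return max(1, min(5, available_processors // 2))
```

For example, a 16-processor node would get 5 threads per pool, while a 4-processor node would get 2.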

Export your Kibana Dashboard to PDF Report in Minutes with Skedler. Fully featured 21-day free trial.

Skedler Reports v4.18.0 now supports Grafana 7.3.0

Here are the highlights of what’s new and improved in Skedler Reports 4.18.0 & Alerts 4.10.0. For detailed information about this release, check the release notes.

Table improvements and new image cell mode

The table has been updated with improved hover behavior for cells whose content is longer than the current column width. Another new feature is the image cell display mode: if a field value is an image URL or a base64-encoded image, you can configure the table to display it as an image.

Table color scheme

All of these new color schemes are theme-aware and adapt to the current theme. Because this new option is a standard field option, it works in every panel and is supported in Skedler v4.18.1.

Shorten URL for dashboards and Explore

The new share shortened link capability allows you to create smaller and simpler URLs of the format /goto/:uid instead of using longer URLs that can contain complex query parameters. In Explore, you can create a shortened link by clicking on the share button in the Explore toolbar. In the dashboards, a shortened URL option is available through the share panel or dashboard button.

Auditing

Auditing tracks important changes to your Grafana instance to help you manage and mitigate suspicious activity and meet compliance requirements. Grafana logs events (as JSON) to file or directly to Loki.

Skedler Reports v4.17.0 now supports Grafana 7.2.0

Here are the highlights of what’s new and improved in Skedler Reports 4.17.0 & Alerts 4.9.0. For detailed information about this release, check the release notes.

New date formatting options added

You can now customize how dates are formatted in Grafana. Custom date formats apply to the time range picker, graphs, and other panel visualizations, and are now supported by Skedler.


For example, you can use a custom full date format with a 12-hour clock and an am/pm suffix, and pair it with a customized month and day format in place of the Grafana default MM/DD format.

Date formats are set for a Grafana instance by adjusting server-wide settings in the Grafana configuration file. We hope to add org- and user-level settings in the future.

```ini
[date_formats]
full_date = MMM Do, YYYY @ hh:mm:ss a
interval_second = hh:mm:ss a
interval_minute = hh:mm a
interval_hour = MMM DD hh:mm a
interval_day = MMM DD
interval_month = YYYY-MM
interval_year = YYYY
```

Field options are now available in full swing!

Table column filters added

You can now dynamically apply value filters to any table column. This option can be enabled for all columns or one specific column using an override rule.

New field override selection options

You can now add override rules that use a regex matcher to choose which fields to apply rules to. The Field options content has been updated as a result of these changes.

New transformations and enhancements

Grafana 7.2 includes the following transformation enhancements, now supported by Skedler:

  • A new Group By transformation that allows you to group by multiple fields and add any number of aggregations for other fields.
  • The Labels to field transformation now allows you to pick one label and use that as the name of the value field.
  • You can drag transformations to reorder them. Remember that transformations are processed in the order they are listed in the UI, so think before you move something!

Drag to reorder queries

The up and down arrows, which were previously the only way to change query order, have been removed. Instead, there is now a grab icon that allows you to drag and drop queries in a list to change their order.

Inspect queries in Explore

The query inspector information provided in your dashboards can now also be reviewed in Explore. You can open the query inspector tab by clicking the button next to the query history.

$__rate_interval for Prometheus

You can now use the new variable $__rate_interval in Prometheus for rate functions. $__rate_interval, in general, is one scrape interval larger than $__interval but is never smaller than four times the scrape interval (which is 15s by default).
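The relationship described above can be sketched as follows, assuming the default 15s scrape interval; Grafana’s exact computation may differ slightly:

```python
def rate_interval(interval_s: float, scrape_interval_s: float = 15.0) -> float:
    # One scrape interval larger than $__interval, but never smaller
    # than four scrape intervals (sketch of the rule described above).
    return max(interval_s + scrape_interval_s, 4 * scrape_interval_s)
```

With a 60s $__interval this gives 75s, while a 10s $__interval is clamped to four scrape intervals, i.e. 60s.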
