We are facing a session IDP refresh prompt from the UI while running workflows and transformations. We tried using a client ID and client secret, but the issue persists. We have been facing this since yesterday. Let me know if any changes are required on our end.
Hi, Is it possible to use multiple clients for the same Cognite Function? I know the handle function only takes one client as an argument, but is it possible to initialize another client inside the function? I want my Cognite Function to read from the prod environment but write to the dev environment, which necessitates two clients. Thanks.
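For what it's worth, a second client can be constructed inside `handle` from credentials passed in via the Function's secrets. A minimal sketch, assuming cognite-sdk is available in the Function runtime; the project name, client name, and secret keys are placeholders of our choosing:

```python
def handle(client, data, secrets):
    # `client` is the injected client (prod, in this setup). Build a
    # second client for the dev project from credentials passed in via
    # the Function's secrets. Imports are deferred so this sketch also
    # parses in environments without cognite-sdk installed.
    from cognite.client import CogniteClient, ClientConfig
    from cognite.client.credentials import OAuthClientCredentials

    dev_config = ClientConfig(
        client_name="my-function-dev-writer",   # placeholder
        project="my-dev-project",               # placeholder
        base_url="https://api.cognitedata.com", # placeholder cluster
        credentials=OAuthClientCredentials(
            token_url=secrets["dev-token-url"],
            client_id=secrets["dev-client-id"],
            client_secret=secrets["dev-client-secret"],
            scopes=["https://api.cognitedata.com/.default"],
        ),
    )
    dev_client = CogniteClient(dev_config)

    # Illustrative only: read from prod, write to dev.
    ts = client.time_series.list(limit=10)
    # dev_client.time_series.upsert(...) etc.
    return {"read_count": len(ts)}
```

The injected client stays read-only against prod; all writes go through `dev_client`.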
I am looking at 127 time series linked to one asset, and I want to download the list of these time series as shown in the screenshot below, but this doesn't appear to be straightforward.

1. The download button circled in blue saves a JSON file linked only to the asset "11. QHP". Is there a way to spare the user the effort of manually selecting and downloading each of the 127 time series and later reassembling them into one table like the one shown in the browser?
2. It is not possible to select and display more than 20 columns in the browser, due to performance issues. This is not critical at this time, but it is still dissatisfying. I want to download everything wholesale and pick what I need from the list locally. Is there a way around this restriction?

Solving issue #1 would also remedy #2, as I'd be able to join the tables locally again. Thanks
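Until the UI supports this, the wholesale download can be scripted with the Python SDK. A hedged sketch, assuming cognite-sdk: list every time series linked to the asset, then pull their datapoints into one pandas DataFrame (one column per series), which can then be sliced locally:

```python
def download_asset_timeseries(client, asset_id, start, end):
    # Hedged sketch: `client` is a configured CogniteClient. The import
    # is implicit via the client, so this parses without cognite-sdk.
    # 1) List all time series linked to the asset (no 20-item cap here).
    ts_list = client.time_series.list(asset_ids=[asset_id], limit=None)
    external_ids = [ts.external_id for ts in ts_list if ts.external_id]

    # 2) Retrieve their datapoints as one wide DataFrame.
    df = client.time_series.data.retrieve_dataframe(
        external_id=external_ids, start=start, end=end
    )
    return df
```

The resulting DataFrame can be written to CSV/Excel and filtered locally, sidestepping the browser's column limit.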
I’m working on a prototype for a flexible data model to store time series data in a way that is easy to catalogue, query, and filter. Using Pygen both to populate and use the model seems convenient. At its current iteration, I’ve only applied direct relations and (undocumented?) @reverseDirectRelations in the GraphQL schema. I expected to be able to do something similar to client.windmill(windfarm="Hornsea 1").blades(limit=-1).sensor_positions(limit=-1).query() as found in the Pygen documentation, but it does not work (my client.windmill analogue has no methods corresponding to its relations). Do I have to use edges instead of relations to query easily and declaratively with Pygen?
Are there any documented use cases or papers on integrating MLflow with Cognite, or is it something we need to implement ourselves? For example, if we aim to seamlessly integrate the MLflow UI with Cognite to evaluate and select the top-performing models, we could leverage SQLite, which operates on the local file system (e.g., mlruns.db) and provides a built-in client, sqlite3. However, our preference is to integrate it seamlessly with Cognite.
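Absent an official integration, one do-it-yourself pattern is to treat the local SQLite tracking file as an artifact and keep a copy in CDF Files, so other users can pull it down and point their MLflow client at it. A sketch under that assumption (the external ID is a placeholder of our choosing):

```python
def sync_mlflow_db_to_cdf(client, db_path="mlruns.db",
                          external_id="mlflow-tracking-db"):
    # Hedged sketch: `client` is a configured CogniteClient. MLflow's
    # SQLite backend stores all run metadata in one file (db_path), so
    # uploading that file with overwrite=True keeps a shared copy in
    # CDF Files that teammates can download and open with MLflow.
    return client.files.upload(
        path=db_path,
        external_id=external_id,
        name="mlruns.db",
        overwrite=True,
    )
```

This does not give a live shared tracking server, only a synced snapshot; a proper multi-user setup would still need a hosted MLflow tracking backend.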
Hi all, I'm trying to increment my view version. To do this, I'm also incrementing my data model version. My original is

```
type Test @view(space: "test_space", version: "1_0") {
  name: String!
  description: String
}
```

and I try to go to

```
type Test @view(space: "test_space", version: "1_1") {
  name: String!
  description: String
  test_write: String
}
```

I've also tried

```
type Test @view(space: "test_space", version: "1_2") {
  name: String!
  description: String
  test_write: String
  required_test_write: String!
}
```

I get this error:

```
Error: could not update data model
An error has occured. Data model was not published.
```

(The rest of the error payload is React rendering metadata with no further detail.)
Within Part 2 of the course, on the page named “Exploring nodes, edges and direct relations”, there is a table stating that edges count towards instance limits, but point 4 of the Part 2 summary states the opposite. Which is correct?
We are a bit unclear on the difference in meaning between the "Uploaded at" vs. "Last Updated" times for Files in CDF. For example, we have seen unintuitive cases where the "Uploaded at" time is newer than the "Last Updated" time; we would expect that never to be the case. Can you define the logic for these two fields and update the documentation here? https://cognite-sdk-python.readthedocs-hosted.com/en/latest/files.html#module-cognite.client.data_classes.files Thank you.
Is it possible to increase the limit on workflow instance executions per project, or to apply the limit per workflow instance instead of per project? We need to schedule workflows, and the number of workflow instances we run depends on how much data we receive, so it can exceed 50.
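As a client-side workaround while the limit is per project, pending runs can be chunked so that no more than 50 executions are in flight at a time, triggering each batch only after the previous one completes. A minimal sketch (the 50-execution figure is taken from the question, not a documented constant here):

```python
def batch_executions(pending, max_concurrent=50):
    """Split a list of pending workflow inputs into batches no larger
    than the per-project concurrent-execution limit. Each batch would
    then be triggered only after the previous batch has finished, so
    the project never exceeds the limit. Client-side workaround sketch;
    the limit itself is enforced server-side."""
    return [pending[i:i + max_concurrent]
            for i in range(0, len(pending), max_concurrent)]
```

For 120 pending runs with a limit of 50, this yields three batches of 50, 50, and 20.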
What access capabilities do I need to run transformations as “Current User”? I have a user who doesn't see “Run as current user”, as in the first screenshot. The next screenshot is mine, and I can see it, probably because I am added as an admin.
CDF: the filter option is not working as expected under Common filters on the Data Explorer screen. Steps to reproduce:

1. Log in to CDF.
2. Click the Data Explorer tab in the CDF menu bar.
3. Click the Files tab on the right side of the panel.
4. Set Data set to 'src:006:documentum:b60:ds' under Common filters on the left side of the screen.
5. Select the 'Before' checkbox under Common filters on the left side of the panel.
6. Click the calendar icon and set the date to (e.g.) '10-01-2023'.

Expected result: the document 'Amarjeet_Test_DT.docx' should not appear in the results window, because it was created after the set date.
Actual result: the document 'Amarjeet_Test_DT.docx' is displayed in CDF.

Note: the issue exists for all date filters (Created time, Updated time) with Before, After, and During in CDF. We want to know which date is used for filtering documents in CDF with these filters.
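For reference, CDF stores createdTime as epoch milliseconds in UTC, so a "Before" filter most plausibly compares against that UTC instant; a cutoff picked in a local timezone can then appear to be off by up to a day. A small stdlib sketch of that comparison (the exact filter semantics are our assumption, pending Cognite's answer):

```python
from datetime import datetime, timezone

def created_before(created_time_ms, cutoff):
    # CDF's createdTime is epoch milliseconds (UTC). Convert it to an
    # aware datetime and compare with a timezone-aware cutoff, so the
    # "Before" comparison is unambiguous.
    created = datetime.fromtimestamp(created_time_ms / 1000, tz=timezone.utc)
    return created < cutoff

cutoff = datetime(2023, 10, 1, tzinfo=timezone.utc)
# A document created 2023-09-30 23:00 UTC falls before the cutoff,
# even though it may already be Oct 1 in timezones east of UTC.
doc = datetime(2023, 9, 30, 23, 0, tzinfo=timezone.utc)
print(created_before(int(doc.timestamp() * 1000), cutoff))  # True
```

If the Fusion filter converts the picked date in a different timezone than expected, documents near midnight will land on the "wrong" side of the cutoff, which could explain the observed behaviour.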
Hello, When I tried to run the DB Extractor, I got the following error:

```
polars\_cpu_check.py:232: RuntimeWarning: Missing required CPU features.
The following required CPU features were not detected: avx, avx2, fma
Continuing to use this version of Polars on this processor will likely result in a crash.
Install the `polars-lts-cpu` package instead of `polars` to run Polars with better compatibility.
Hint: If you are on an Apple ARM machine (e.g. M1) this is likely due to running Python under Rosetta.
It is recommended to install a native version of Python that does not run under Rosetta x86-64 emulation.
If you believe this warning to be a false positive, you can set the `POLARS_SKIP_CPU_CHECK` environment variable to bypass this check.
```

After doing some googling, I was able to install the polars-lts-cpu package referenced above using Python, but I got the same error. I'm not sure how to make the extractor reference the polars-lts-cpu package when it runs. See the attached screenshot. The extractor i
I am facing this error in the data science course on “creating Cognite Functions” with the Cognite SDK. In previous courses I had fixed this error by replacing the “datapoints” keyword with “time_series”. However, I would like to know whether I am using the right packages, or whether the commands are deprecated and have new function names in newer versions. Kindly let me know; I am trying to finish these courses before my local boot camp next week. Thanks, Lavanya
Data Workflow UI stopped rendering the workflow with the following error message. It stopped working as soon as I created a CDF function to trigger the workflow. https://delfi-us.fusion.cognite.com/shaya-dev/flows/SDM-ARP-Model-Refresh?cluster=az-eastus-1.cognitedata.com&env=az-eastus-1
Regarding deployment of workflows: can you guide us on how to automate the deployment of Cognite workflows using the SDK?
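A hedged sketch of scripted deployment, assuming the Workflows API in cognite-sdk v7 (client.workflows.upsert and client.workflows.versions.upsert); please check the SDK reference for the exact data-class names and fields in your installed version:

```python
def deploy_workflow(client, workflow_xid, version, tasks, description=""):
    # Hedged sketch: `client` is a configured CogniteClient. Imports are
    # deferred so this parses without cognite-sdk installed. `tasks` is
    # a list of WorkflowTask definitions built by the caller.
    from cognite.client.data_classes import (
        WorkflowUpsert,
        WorkflowVersionUpsert,
        WorkflowDefinitionUpsert,
    )

    # Idempotent: upsert the workflow shell, then the versioned definition,
    # so the same script can run from CI on every change.
    client.workflows.upsert(
        WorkflowUpsert(external_id=workflow_xid, description=description)
    )
    client.workflows.versions.upsert(
        WorkflowVersionUpsert(
            workflow_external_id=workflow_xid,
            version=version,
            workflow_definition=WorkflowDefinitionUpsert(
                tasks=tasks, description=description
            ),
        )
    )
```

Because both calls are upserts, wiring this function into a CI pipeline gives repeatable, automated deployments.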
Is the following use case a good fit for Data Workflows?

A Cognite Function reads data from time series and writes "event-like" data into Data Modeling:
- continuous processing for metric type A: ideally an execution every minute, but more importantly no overlapping executions (i.e., if an execution takes longer than a minute), passing state from one function execution to the next
- hourly processing for metrics of type B

Idea: 2 Data Workflows:
1. Hourly execution of a workflow that triggers a Cognite Function to calculate metric B.
2. Daily execution of a workflow that triggers a Cognite Function which recursively outputs a dynamic task that calculates metric A. The Function outputs information for the next execution run (i.e., timestamp for the next execution, state, and other info). At the end of the day the dynamic task would not output a timestamp for the next execution and the workflow would complete. The execution trigger could also be daily.
Hi Team, Is there any possibility that I can register for the Bootcamp virtually, or can it be held at an India location, as I am from India? Also, I want to know whether the Bootcamp is for individuals or groups, and the cost of attending the Bootcamp. Thanks, Navyasri Indupalli
After how much time is a retry of a function triggered on failure? Is there a way to set the delay before the function is retriggered after it fails? It would help us deal with rate limits in Functions, if any.
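While the platform's retry timing is being clarified, rate limits can be smoothed on the client side by wrapping the failing call in exponential backoff. A stdlib sketch (the delay schedule is our choice, not a platform default):

```python
import time

def call_with_backoff(fn, retries=5, base_delay=1.0, sleep=time.sleep):
    """Retry `fn` with exponential backoff: wait base_delay,
    2*base_delay, 4*base_delay, ... between attempts, re-raising the
    last error once retries are exhausted. `sleep` is injectable so the
    schedule can be tested without actually waiting."""
    for attempt in range(retries):
        try:
            return fn()
        except Exception:
            if attempt == retries - 1:
                raise
            sleep(base_delay * (2 ** attempt))
```

Wrapping rate-limited API calls this way inside the Function body reduces dependence on whatever the platform's own retry interval turns out to be.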
Hi, There are some bugs when doing contextualization in the Fusion GUI. It should be possible to “select all” when I do a search query in the interactive engineering diagrams contextualization workflow.
While testing versions of the cognite-sdk for Python >= 7.37.0, we noticed a performance issue in the "retrieve_dataframe" method of the time series API for all versions 7.37.x (0 <= x <= 3). Example: for the same data window, we tested with version 7.0.0 and the latest versions. The processing time for version 7.0.0 in this method was less than 5 seconds, while for the newer versions the processing time was 3 minutes.
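To make such comparisons reproducible, the same retrieve_dataframe query can be timed under each installed SDK version with a tiny harness:

```python
import time

def time_call(fn, *args, **kwargs):
    """Return (result, elapsed_seconds) for one call. Run the identical
    query (same external IDs, start, end) in environments pinned to
    cognite-sdk 7.0.0 and 7.37.x and compare the elapsed times."""
    start = time.perf_counter()
    result = fn(*args, **kwargs)
    return result, time.perf_counter() - start
```

Usage would be, e.g., `df, elapsed = time_call(client.time_series.data.retrieve_dataframe, external_id=ids, start=start, end=end)`, with the same arguments in each environment.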
Hi! I’m trying to create a Power BI Dataflow using CDF time series data. The same data is retrieved absolutely fine when using Power BI Desktop and Power Query inside the Power BI Desktop app, but when I try to use exactly the same query as a Dataflow instead, there is a warning: “The query “Timeseries” contains columns with complex types that cannot be loaded.” Some data is retrieved, but it’s incomplete, with big parts missing. It looks like the CDF connector is not able to retrieve the data correctly when a Dataflow is used instead of a normal Power BI Desktop dataset. Is there a way to make Dataflows compatible with the CDF connector as well?
How does Cognite handle data encryption at rest? Is there any documentation available regarding this requirement? Additionally, concerning data encryption in transit, are there alternative approaches to TLS or mTLS?
Hello, I am trying to follow along with the Entity Matching notebook in the Data Engineering learning module. I am having trouble generating the COGNITE SECRET to run the notebook; kindly let me know if I may have missed a lesson. Thanks, Lavanya
Hi Cognite team, We have been provided the Cognite sandbox environment with a credit code, but we are unsure of how to access it. Can you please support us with this?
Hello, I am trying to perform a calculation on Charts but it does not work. It is a multiplication of 4 time series, all of them resampled to a granularity of 1m. One of them has real values and 3 of them are 0/1 steps. When I zoom out, I can see the calculation, and it shows a peak of values that is not accurate, as it shows averaged data. But when I zoom in to see the data, the trend disappears, and the error says “One of the time series has less than two values”. As I resampled the data to 1 minute, I was expecting to see one value per minute, but that does not seem to happen. Is there another way to calculate the data correctly? Zoom in: