We are seeing session IDP refresh prompts in the UI while running workflows and transformations. We tried using the client ID and client secret, but we have been facing this since yesterday. Let us know if any changes are required on our end.
Hi everybody. My name is Carlos, I'm a Software Quality Assurance engineer and I just started with CDF. I'm so excited to be here and to be part of this community!
Hi everyone,

My name is Gustavo Alvarez, and I have 25 years of experience in the field of automation and control, along with a strong background in industrial process management. Throughout my career, I have had the opportunity to work on a variety of projects, ranging from implementing automated systems to optimizing complex industrial processes.

I am very excited to start the CDF course from Cognite. I am confident that the skills and knowledge gained from this course will be invaluable in advancing my career and making significant contributions to future projects in the field of automation and control. Additionally, I am particularly interested in learning about the application of Generative AI in the industry and how it can transform and optimize industrial processes as part of the ongoing digital transformation.

Best regards,
Hi. In our data models, we typically have a few attributes that contain only a single float, say the “installedCapacity” of a power plant. It would be useful for this float to be a “float with unit” type, preferably connected to the unit catalog for simple conversion. I assume that right now we’d have to define the “float with unit” as our own custom type, but wouldn’t it be a good idea to have this value type available as a standard type?
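Until a standard type exists, a minimal client-side sketch of such a custom “float with unit” type might look like the following. The catalog dict and class names here are illustrative, not part of the CDF API or unit catalog:

```python
# Sketch of a custom "float with unit" value type with conversion through a
# base unit. POWER_UNITS_TO_WATT stands in for the real unit catalog.
from dataclasses import dataclass

# Hypothetical slice of a unit catalog: factor to convert each unit to watts.
POWER_UNITS_TO_WATT = {"W": 1.0, "kW": 1e3, "MW": 1e6}

@dataclass(frozen=True)
class FloatWithUnit:
    value: float
    unit: str

    def to(self, target_unit: str) -> "FloatWithUnit":
        """Convert by going through the base unit (watts)."""
        base = self.value * POWER_UNITS_TO_WATT[self.unit]
        return FloatWithUnit(base / POWER_UNITS_TO_WATT[target_unit], target_unit)

installed_capacity = FloatWithUnit(1.5, "MW")
print(installed_capacity.to("kW"))  # FloatWithUnit(value=1500.0, unit='kW')
```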
Hi, the documentation mentions that 10,000 time series is the limit for each subscription. Is this a hard limit, or can it be changed based on the use case? We are using a CDF workflow whose first function reads data point subscriptions to get the time series and datapoints. For our use case, 10,000 time series per subscription is too small: we are considering 10,000 wellbores with 200 properties each (10,000 × 200 = 2 million time series). Any recommendations?
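For context, a back-of-envelope sketch of what the documented 10,000-series cap would imply here, using the numbers from the post (plain Python, no CDF calls):

```python
# How many subscriptions would the documented per-subscription cap require,
# assuming the 10,000 time series limit is a hard limit?
import math

def plan_subscriptions(n_timeseries: int, per_subscription: int = 10_000) -> int:
    """Minimum number of subscriptions needed to cover all time series."""
    return math.ceil(n_timeseries / per_subscription)

n_wellbores, n_properties = 10_000, 200
total = n_wellbores * n_properties      # 2,000,000 time series
print(plan_subscriptions(total))        # 200 subscriptions
```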
Checking the online documentation for the PI and PI-AF extractors, you will only find a single supported version of the PI-AF SDK: PI AF Client 2018 SP3 Patch 3. But the docs don't say which versions of PI Data Archive and PI-AF the extractors are compatible with. It would be great if the docs could mention the range of supported versions for those two components.
I’m working on a prototype for a flexible data model to store time series data in a way that is easy to catalogue, query and filter. Using Pygen both to populate and use the model seems convenient. In its current iteration, I’ve only applied direct relations and (undocumented?) @reverseDirectRelations in the GraphQL schema. I expected to be able to do something similar to client.windmill(windfarm="Hornsea 1").blades(limit=-1).sensor_positions(limit=-1).query() as found in the Pygen documentation, but it does not work (my client.windmill analogue has no methods corresponding to its relations). Do I have to use edges instead of direct relations to query easily and declaratively with Pygen?
Problem description: the Cognite File Extractor for SharePoint is not able to recursively retrieve documents from a SharePoint site that has sub-sites, e.g. a Document Control System.

Alternatives explored:
1. Provide the SharePoint site URL as described in the documentation (SharePoint Online | Cognite Documentation). Result: no errors, but no documents are loaded. Evaluation: recursion does not occur from the site level down to the document or folder level where SharePoint stores the documents.
2. Provide the document library and folder / sub-folder. Result: working, but not sustainable. Evaluation: potentially >100 URLs would have to be configured and kept up to date as document libraries or folders are created or deleted.

Recommended enhancement: recursion from the root node (SharePoint site). If a site URL is provided, recursion should go through each sub-site (if present) down to the document libraries and sub-folders to retrieve documents. This should be treated as high priority.
I am looking at 127 time series linked to one asset and I want to download the list of these time series as shown in the screenshot below, but this doesn’t appear to be straightforward.

1. The download button circled in blue saves a JSON file linked only to the asset “11. QHP”. Is there a way to spare the user the effort of manually selecting and downloading each of the 127 time series and later reassembling them into one table like the one shown in the browser?
2. It is not possible to select and display more than 20 columns in the browser, due to performance issues. This is not critical at this time, but still dissatisfying. I want to download everything wholesale and pick what I need from the list locally. Is there a way around this restriction?

Solving issue 1 would also remedy issue 2, as I’d be able to join the tables locally. Thanks
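One possible workaround, sketched with the Cognite Python SDK: list all time series linked to the asset, pull their datapoints into a single DataFrame, and save it locally. This assumes an already-authenticated CogniteClient; the asset id, time range and file name are placeholders:

```python
# Sketch: bulk-download all time series linked to one asset via the Python SDK
# instead of the UI. `client` is an authenticated cognite.client.CogniteClient.
def download_asset_timeseries(client, asset_id, path="asset_timeseries.csv"):
    """Fetch every time series on `asset_id` into one DataFrame and save it."""
    ts_list = client.time_series.list(asset_ids=[asset_id], limit=None)
    df = client.time_series.data.retrieve_dataframe(
        external_id=[ts.external_id for ts in ts_list],
        start="30d-ago",   # illustrative time range
        end="now",
    )
    df.to_csv(path)        # pick the columns you need locally
    return df
```

With the table on disk, the 20-column browser limit no longer applies.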
The CDF File Extractor should support a “delete” scenario for the SharePoint file extractor: if a file is deleted from SharePoint, the extractor should be able to reconcile the difference with what has already been loaded to CDF. With this feature, we would be able to maintain synchronization between source and destination.
Hi, is it possible to use multiple clients in the same Cognite Function? I know the handle function only takes one client as an argument, but is it possible to initialize another client inside the function? I want my Cognite Function to read from the prod environment but write to the dev environment, which requires two clients. Thanks.
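For reference, a sketch of how a second client could be initialized inside the handle function, assuming the dev credentials are stored as function secrets. The project name, base URL, secret key names and the RAW read/write at the end are placeholders, not a confirmed recipe:

```python
def handle(client, data, secrets):
    # `client` is the function's own (prod) client, injected by CDF.
    # Build a second client for the dev project from function secrets.
    from cognite.client import CogniteClient, ClientConfig
    from cognite.client.credentials import OAuthClientCredentials

    dev_client = CogniteClient(
        ClientConfig(
            client_name="dev-writer",
            project="my-dev-project",                      # placeholder
            base_url="https://api.cognitedata.com",        # placeholder
            credentials=OAuthClientCredentials(
                token_url=secrets["dev-token-url"],
                client_id=secrets["dev-client-id"],
                client_secret=secrets["dev-client-secret"],
                scopes=["https://api.cognitedata.com/.default"],
            ),
        )
    )

    # Illustration only: read RAW rows from prod, write them to dev.
    rows = client.raw.rows.list("my_db", "my_table", limit=None)
    dev_client.raw.rows.insert("my_db", "my_table", rows)
    return {"copied": len(rows)}
```

Keeping the imports inside the handle function keeps the deployment zip minimal; the dev credentials should be uploaded as secrets when creating the function rather than hard-coded.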
Hello! My name is Bjørnar Myhre Ås and I have just started the basic training. I have been working for SLB for 19 years and am very excited to see what CDF can do, especially for the industry I work in.
The current selection of the target data model when creating a transformation is a sort of lazy load: only three data models are available in the dropdown, even though there are dozens of data models in the project. As the project grows, loading the data models can take too long, since it loads them one by one; and if you close the pop-up, it starts over. I suspect this is because it is fetching all the information about the data models (all available versions and views) to fill the other fields, and that can be too much to load all at once.

The proposal is to fetch all data models at once, but only their names, and then, once a data model is selected, fetch the rest of the information for that data model only. This way the UI would be more responsive when making a selection, and we would fetch less unnecessary data.
As an operator, I want to filter the tasks on a specific checklist, so that it is easier to find specific tasks in large checklists. This is a suggestion from Celanese users: being able to search for specific tasks. It would be especially useful for large checklists, where the operator has to go through the unit taking readings. They don't always take these readings along a fixed path; they walk around and fill in the tasks as they go. Finding a task is difficult in those situations: they have to keep scrolling, looking for the reading they are taking at the moment.
There are cases during checklist task execution where technicians and operators need to select more than one option from the Check Item buttons created. For example, there could be multiple reasons for a task to be ‘Not Ok’. Currently, users can only select one of the Check Item buttons. See the example below: all the options are ‘Not Ok’, and in this case it could be one or all 4 of them. We considered using a ‘Message’ field, but that is not ideal because users can type anything into the Reply field.
The Cognite PI extractor is case sensitive and will only pick up tags whose names exactly match those in the OSI PI server data source. This is causing major issues for our data ingestion process, since the tags shared by the use case proponent are not always written exactly as on the PI server itself (capitalization may differ). For example:

- Shared tag name: A01AAB0A.pv
- Tag name in PI server: a01AAB0A.PV

In this situation, the following config would fail to ingest the listed tag:

```
extractor:
  include-tags:
    - "A01AAB0A.pv"
```

It would be great if the PI extractor had a case-sensitive: true/false flag to allow case-insensitive tag filtering for ingestion. For example, something like this would allow the listed tag to be ingested:

```
extractor:
  case-sensitive: false
  include-tags:
    - "A01AAB0A.pv"
```

We often fail to ingest time series with this kind of discrepancy, which causes inconvenience on the user side.
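A plain-Python sketch of the proposed behaviour; this flag does not exist in the PI extractor today, and the function below is illustrative only:

```python
# Illustration of the proposed `case-sensitive: false` flag: match configured
# include-tags against source tag names, optionally ignoring case.
def filter_tags(source_tags, include_tags, case_sensitive=True):
    """Return the source tags selected by the include-tags list."""
    if case_sensitive:
        wanted = set(include_tags)
        return [t for t in source_tags if t in wanted]
    wanted = {t.lower() for t in include_tags}
    return [t for t in source_tags if t.lower() in wanted]

print(filter_tags(["a01AAB0A.PV"], ["A01AAB0A.pv"], case_sensitive=False))
# ['a01AAB0A.PV']  -- the mismatched-capitalization tag is now picked up
print(filter_tags(["a01AAB0A.PV"], ["A01AAB0A.pv"], case_sensitive=True))
# []               -- current behaviour: exact match only
```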
Are there any documented use cases or papers on integrating MLflow with Cognite, or is this something we need to implement ourselves? For example, if we aim to seamlessly integrate the MLflow UI with Cognite to evaluate and select the top-performing models, we could use SQLite as the tracking backend, which operates on the local file system (e.g. mlruns.db) and is accessible through Python's built-in sqlite3 client. However, our preference is to integrate it seamlessly with Cognite.
We need Semantic Search to handle at least 500 documents in one shot. We also need a similarity index on the results, as well as support for PDF and Microsoft Office files.
Hi all, I'm trying to increment my view version. To do this, I'm also incrementing my data model version. My original is

```
type Test @view(space: "test_space", version: "1_0") {
  name: String!
  description: String
}
```

and I try to go to

```
type Test @view(space: "test_space", version: "1_1") {
  name: String!
  description: String
  test_write: String
}
```

I've also tried

```
type Test @view(space: "test_space", version: "1_2") {
  name: String!
  description: String
  test_write: String
  required_test_write: String!
}
```

I get this error:

```
{
  "title": "Error: could not update data model",
  "message": "An error has occured. Data model was not published."
}
```
Hello, it would be nice to be able to use asset external IDs (instead of the assetId field) in transformations, to link events (or time series, sequences) to existing assets.
In part 2 of the course, on the page named “Exploring nodes, edges and direct relations”, there is a table stating that edges count towards instance limits, but point 4 of the Part 2 summary states the opposite. Which is correct?
No questions yet! Just excited to start the CDF journey. :) Bert Greeby
We are a bit unclear on the difference in meaning between the "Uploaded at" and "Last Updated" times for files in CDF. For example, we have seen unintuitive cases where the "Uploaded at" time is newer than the "Last Updated" time; we would expect that never to be the case. Can you define the logic for these two fields and update the documentation here? https://cognite-sdk-python.readthedocs-hosted.com/en/latest/files.html#module-cognite.client.data_classes.files Thank you.
Is it possible to increase the limit on the execution of workflow instances per project, or to make the limit apply per workflow instance instead of per project? We need to schedule workflows, and the number of workflow instances depends on how much data we receive, so it can exceed 50.
As an operator, I want to optimize the number of tasks shown in the available space, so that it is easier to navigate through them. This is a suggestion from Celanese users: maximize the number of tasks shown on screen. In this screenshot we can only see 3 tasks. It would improve the user experience if, for example, each task showed its name with 2 small buttons right beside it, occupying only 1 line. That would make the tasks easier to read, especially in checklists with numerous tasks to fill in.