Hello, I am trying to follow along with the first chapter in Data Engineer Basics - Integrate, specifically the first workbook: Authentication. When I try to install Poetry, I get the error below. Can anyone let me know how I might troubleshoot? I am on macOS with Python 3.12 and pipx (instead of pip).

at ~/Library/Application Support/pypoetry/venv/lib/python3.12/site-packages/poetry/installation/chef.py:164 in _prepare
  160│
  161│     error = ChefBuildError("\n\n".join(message_parts))
  162│
  163│     if error is not None:
→ 164│         raise error from None
  165│
  166│     return path
  167│
  168│ def _prepare_sdist(self, archive: Path, destination: Path | None = None) -> Path:

Note: This error originates from the build backend, and is likely not a problem with poetry but with pyzmq (25.0.2) not supporting PEP 517 builds. You can verify this by running 'pip wheel --no-cache-dir --use-pep517 "pyzmq (==25.0.2)"'.
Hi, I am unable to connect the CDF data source to Grafana. Please advise as soon as possible. Thanks.
I'm trying to create a new function in Python. When I run this function locally it works fine, but when I run it from the CDF service it does not work. I got this error message:

CogniteAPIError: <!doctype html><html lang=en><title>500 Internal Server Error</title><h1>Internal Server Error</h1><p>The server encountered an internal error and was unable to complete your request. Either the server is overloaded or there is an error in the application.</p> | code: 500 | X-Request-ID: 04b27193-d865-98d5-8a2a-11b94126e8eb
I am creating a transformation where I am joining data from two different views/containers, where table B has a node reference to table A. I have tried to find documentation for this, but I have not found any so far. Through friends and trial and error I have found two options, and neither seems to perform well. Example:

Type A { Name: String }
Type B { Name: String, A_ref: A }

What I have found as possible solutions:

1. Go through cdf_data_models and pick the externalId from the node reference:
from cdf_data_models(<spc>, <mod>, <ver>, "A") as A join cdf_… as B on A.external_id = B.A_ref.externalId

2. Same as above, but join on A as a node reference:
on B.A_ref = node_reference(<spc>, A)

Neither of these options seems to be documented anywhere, and I can't find any other ways documented. Read performance when using these joins seems slow, even though I have set up a few indexes which should cover the different joins. This is also slower than
What is the permissible size of a file that can be uploaded to CDF? Which types of P&ID files can be read directly in CDF? Which types of 3D model files can be read directly in CDF? Is any converter tool available in CDF for converting files into the required formats for P&ID and 3D models in Cognite Data Fusion?
Is there a way to query metadata of time series from a particular data set in CDF Transformations? Something like this, but this query isn't working:

select
  cast(`externalId` as STRING) as externalId,
  cast(`dataSetId` as STRING) as eid
from `_cdf`.`timeseries`
where dataSetId == <data_set_id>;
Hello everyone. We have some checklists where the operators measure the length of something, usually in feet and inches, like 1'10". We have been having trouble creating tasks with this type of data in the templates; we could only add decimal numbers. How have you been handling this type of data? Thank you.
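One common workaround, since the templates only accept decimal numbers, is to convert the feet-and-inches string to a decimal before entering it. A minimal stdlib sketch of such a conversion (the function name is illustrative, not part of any Cognite API):

```python
import re

def feet_inches_to_decimal(value: str) -> float:
    """Convert a feet-and-inches string like 1'10" to decimal feet."""
    match = re.fullmatch(r"""\s*(\d+)'\s*(\d+(?:\.\d+)?)"\s*""", value)
    if not match:
        raise ValueError(f"Unrecognized length: {value!r}")
    feet, inches = int(match.group(1)), float(match.group(2))
    return feet + inches / 12.0

print(feet_inches_to_decimal("1'10\""))  # ≈ 1.833 feet
```

The inverse (decimal back to feet/inches for display) would follow the same pattern with divmod.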
Hi Team, I am doing Notebook 2 in the Data Engineer Basics - Integrate course and getting this error. Am I missing something here?
When installing pygen in a CDF notebook you may be met with:

ValueError: Requested 'typing-extensions>=4.10.0; python_version < "3.13"', but typing-extensions==4.7.1 is already installed

This is currently a known bug, which we are working on solving. For now, the workaround is to manually uninstall `typing-extensions` using micropip. The code to do so is documented in the installation instructions for pygen, along with other known issues and solutions.
Hello! I tried to set up a simple cogex extractor, but I get an error that is not letting me proceed with the course. Please help.

SyntaxWarning: invalid escape sequence '\e' """
csv_extractor_2/__main__.py:13: error: Argument "run_handle" to "Extractor" has incompatible type "Callable[[CogniteClient, AbstractStateStore, Config, Event], None]"; expected "Callable[[CogniteClient, AbstractStateStore, Config, CancellationToken], None] | None" [arg-type]
    run_handle=run_extractor,
               ^~~~~~~~~~~~~
Found 1 error in 1 file (checked 4 source files)
Doing the ‘hands-on’ for the Data Engineer Basics “Learn to Use the Cognite Python SDK” course. I am down to adding events to CDF from events.csv. I noted that the dataframe df has some “NaN” entries wherever the csv has a blank. However, creating events in CDF fails with an error. Next, I replaced all the blank csv entries with the text “blank” and read the file into a dataframe; the latter does not have any “NaN” values, yet it throws the same exception when creating events in CDF. If someone can kindly assist, I would be very grateful. Thank you, Doug
Hi, I’m doing the Learn to Use the Cognite Python SDK module in Data Scientist Basics, and the data seems incorrect. For example, in the notebook `2_List_Search_Retrieve.ipynb` the data presented appears to be very random, and not industrial data. When I run a query like `c.assets.list(metadata={'ELC_STATUS_ID': '1211'}, limit=5)` it comes back empty because there is no industrial data in any of the datasets. In fact, almost all of the datasets returned by the c.data_sets.list() command are deprecated and archived. It looks to me like the datasets are potentially tests that people have made. I would really like to be able to see more realistic data in the dataset used for exploring the tool. Please let me know how I can do that, if anyone knows!
Hi, I’m doing the data science modules. I’m on Notebook 2 (List/Search/Retrieve), and I note that when I use the LabelFilter on the assets, I find a number of assets with a label called ‘COLD’. However, when I do c.labels.list(limit=None), I get only 4 labels and none of them match the ‘COLD’ label. Are labels on assets different from the labels found using c.labels.list?
Notify an instructor and ask them to give the required consent to the application in AAD (requires admin privileges):
1. Share the link of your Grafana instance (https://<NAME OF DOMAIN>.grafana.net/) with the instructor.
2. The instructor will navigate to the link, click the box next to "Consent on behalf of your organization", and click "Accept".
3. Once the required permissions are granted, sign in to https://<NAME OF DOMAIN>.grafana.net/ in a new session (e.g. incognito mode) and choose "Sign in with Microsoft".

Please assign me the required privileges. https://parthsinha.grafana.net/
(Please forgive the erroneous posting of this under Academy Discussions.) Doing the ‘hands-on’ for the Data Engineer Basics “Learn to Use the Cognite Python SDK” course. I have done the following: made a dataframe by reading a csv file. One of the columns in the csv file is ‘region’, which takes 5 different values. When the code is run, “nan” appears on the screen for some entries. Q1: Does this mean pandas has interpreted the region “None” as “nan”? If so, that would explain why the following code throws “ValueError: Out of range float values are not JSON compliant. Make sure your data does not contain NaN(s) or +/- Inf!” I would value your guidance as I have not worked with pandas much at all. Thank you, Doug
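For what it's worth, that ValueError typically means NaN values survived into the request payload: pandas represents missing entries (including the string "None" read as a missing value) as float NaN, which is not valid JSON. A minimal stdlib sketch of the failure and of one common fix, replacing NaN with None before serialization (the helper name is illustrative):

```python
import json
import math

row = {"region": float("nan"), "site": "A"}

# Strict JSON serialization rejects NaN, mirroring the SDK's complaint
try:
    json.dumps(row, allow_nan=False)
except ValueError:
    pass  # NaN is not JSON compliant

def drop_nans(record: dict) -> dict:
    """Replace float NaN values with None so the payload is valid JSON."""
    return {
        k: (None if isinstance(v, float) and math.isnan(v) else v)
        for k, v in record.items()
    }

print(json.dumps(drop_nans(row)))  # {"region": null, "site": "A"}
```

With pandas specifically, `df.where(df.notna(), None)` or `df.replace({float("nan"): None})` achieves the same cleanup before building event objects.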
As a user, I don’t want to find outdated assets (facilities and wells) inside the PSN Viewer. One of the traits of the Versioned Asset Hierarchy is that CDF assets disappearing from the source will not be removed from CDF but will lose their relationships. Therefore, CDF assets to be displayed in the PSN Viewer must have at least one active relationship. At the moment, the implementation of this rule is missing. The PSN Viewer in Cognite Maintain shall feature only wells and facilities that have at least one relationship to a target in the Production System Network. CDF assets from other hierarchy sources (i.e., the I&M Hierarchy) must be removed from the PSN Viewer’s search functionality.
When I look at event data, I mostly see *** (I think this is like NULL or NaN, i.e. not filled), and I see some values such as worktask, rule_broken, Condition, etc. I am new to Open Industrial Data and am thinking of using it for a thesis. Is there a way to get the different event “types”? I tried scrolling, and the API is not working well for me. Any advice on this front would be greatly appreciated. Thank you very much.
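If scrolling the UI is painful, one low-tech option is to fetch the events with whatever client you use and tally the type field yourself. A stdlib sketch of the tallying step, assuming the events have already been retrieved as a list of dicts (the sample data below is made up for illustration):

```python
from collections import Counter

# Hypothetical sample; in practice these dicts would come from the events API
events = [
    {"type": "worktask", "subtype": "A"},
    {"type": "rule_broken", "subtype": None},
    {"type": "worktask", "subtype": "B"},
]

# Count each distinct event type, mapping missing types to a placeholder
type_counts = Counter(e.get("type") or "<missing>" for e in events)
print(type_counts.most_common())  # [('worktask', 2), ('rule_broken', 1)]
```

The events API may also offer server-side aggregation of unique field values, which would avoid fetching everything; check the API documentation for your client.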
Hi, I’m using the Charts function “Statistics/Outlier Removal” and noticed that the option “Remove zeros” (aka del_zero_val) does the exact opposite of what one would expect.
Example of a data stream filtered without “Remove zeros”.
Example of a data stream filtered with “Remove zeros”.
Is this a bug or the intended behaviour? If the latter, then I think the naming/documentation is misleading.
I am just playing with the analysis of Open Industrial Data (mostly time-series IoT sensor data). It is my understanding that encoders play a role in detecting anomalies in the data. Can decoder-only models such as Lag-Llama, TimesFM, and Moirai detect anomalies, or can they only predict? Thank you.
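For context, any forecaster (decoder-only or otherwise) can be turned into an anomaly detector by flagging points whose observed value deviates too far from the prediction. A toy stdlib sketch of that residual-threshold idea, where a naive last-value predictor stands in for a real model and the data and threshold are made up:

```python
import statistics

observed = [10.0, 10.2, 9.9, 10.1, 30.0, 10.0]

# Naive one-step "forecast": predict the previous value (stand-in for a model)
predicted = observed[:1] + observed[:-1]

# Residuals between observation and forecast; large residuals are suspect
residuals = [abs(o - p) for o, p in zip(observed, predicted)]

# Robust threshold based on the median residual (choice is illustrative)
threshold = 10 * statistics.median(residuals)
anomalies = [i for i, r in enumerate(residuals) if r > threshold]
print(anomalies)  # [4, 5]: the spike, and the step back down right after it
```

A real decoder-only model would replace the naive predictor with its forecast distribution, and quantile bands would replace the fixed threshold.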
When displaying many similar time series together in Charts, it is useful to select “Merge unit” to reduce the number of Y axes. This functionality does not follow the chart when it is inserted into a canvas. Have I missed something here?
Functions under a dynamic task produce output that needs to be passed to the next function in the workflow. The output of wk-create-2 under the dynamic task is {"count": 1}. How can we send this output to the next called function, or collect the output of all dynamic tasks? I tried the following, but it gave a serialization error. What is the correct way to implement this, if possible?

[
  {
    "externalId": "wk-get-ext-1",
    "type": "function",
    "name": "Dynamic Task Generator",
    "description": "Returns a list of workflow tasks",
    "parameters": {
      "function": {
        "externalId": "wk-get-ext-1",
        "data": "${workflow.input}"
      }
    },
    "retries": 1,
    "timeout": 3600,
    "onFailure": "abortWorkflow",
    "dependsOn": []
  },
  {
    "externalId": "wk-create-2",
    "type": "dynamic",
    "name": "Dynamic Task",
    "description": "Executes a list of workflow tasks",
    "parameters": {
      "dynamic": {
        "tasks": "${wk-get-ext-1.output.response.tasks}"
      }
    },
    "retries": 1,
    "timeout": 3600,
    "onFailure": "abortW
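Regarding the serialization error: whatever the generator function returns under `response.tasks` must be a plain, JSON-serializable structure, since the workflow engine substitutes `${wk-get-ext-1.output.response.tasks}` with that value. A hedged stdlib sketch of what such a generator might return (the task shape loosely mirrors the definition above; names are taken from this post and the count-based fan-out is illustrative):

```python
import json

def handle(data: dict) -> dict:
    """Build a list of workflow task definitions for a dynamic task.

    Everything returned must be JSON-serializable: no sets, datetimes,
    NumPy types, or custom objects, or serialization will fail.
    """
    tasks = [
        {
            "externalId": f"wk-create-2-{i}",
            "type": "function",
            "parameters": {
                "function": {"externalId": "wk-create-2", "data": {"index": i}}
            },
        }
        for i in range(data.get("count", 1))
    ]
    return {"tasks": tasks}

out = handle({"count": 2})
json.dumps(out)  # raises TypeError if anything non-serializable sneaks in
print(len(out["tasks"]))  # 2
```

Running the payload through `json.dumps` locally, as above, is a quick way to find the offending value before deploying the workflow.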
Suppose I want to perform the activities below in a workflow: Function 1 → Dynamic Task 1 (set of tasks) → Function 2. Function 2 will only start after all dynamic tasks are completed. Is there any way for Function 2 to run in parallel, based on the different output coming from each dynamic task? Even if this can be achieved without dynamic tasks in the workflow, please let me know those ways. The outputs coming from the dynamic tasks are independent, so it should be fine to run Function 2 independently for each output. Also, please provide an example workflow definition that uses the output of a dynamic task in Function 2; for example, using the output of dynamic task "step-2-dynamic-task-get-records" in "step-3-print-records". This is the workflow definition I am using:

{
  "items": [
    {
      "workflowExternalId": "nk-cdf-workflow2",
      "version": "5",
      "workflowDefinition": {
        "description": "",
        "tasks": [
          {
I want to complete Data Engineer Basics - Integrate. How can I purchase the 2 credits required for it?
Hello everyone. I was testing offline mode in Infield and got odd behavior using an iPad with Safari: when I go offline, I am not able to select any field to insert values, nor click any buttons. It is only possible when I am back online. Is this how it is supposed to work? I have a video but could not attach it here. Thank you in advance.
Hello everyone, I’m new to CDF. I would like to know how we can make the best use of CDF to implement machine learning or artificial intelligence solutions in my company.