Commit 0912c95e authored by Ilias Dimopoulos

Merge remote-tracking branch 'origin/develop' into ISAICP-7674

parents 97786ac5 5a3f3775
1 merge request: !101 Release v1.92.0
@@ -136,7 +136,7 @@ NEXTCLOUD_PASS=
# Artifacts dir, used to store test artifacts. Either absolute or relative to
# the webroot directory. Configure it in .env file. E.g.:
ARTIFACTS_DIR="${DRUPAL_FILE_TEMP_PATH:-../tmp}/artifacts"
# Monolog logs file paths. This is ONLY used in production and acceptance. Items
# are separated by semi-colon. The Monolog handler ID is separated by its path
...
@@ -7,7 +7,7 @@ funded by the European Union via the [Interoperability Solutions for European
Public Administrations (ISA)](http://ec.europa.eu/isa/) Programme.

It offers several services that aim to help e-Government professionals share
their experience with each other. We also hope to support them to find,
choose, re-use, develop and implement interoperability solutions.

The Joinup platform is developed as a Drupal 8 distribution, and therefore
@@ -19,6 +19,7 @@ Joinup is licensed under the
compatible with the GPL.

## Contributing

See our [contributors guide](.github/CONTRIBUTING.md).

## Running your own instance of Joinup
@@ -35,9 +36,11 @@ To start with docker, please, check the separated [README file](docs/docker/READ
To run Joinup locally, below is a list of requirements and instructions.

#### On macOS without Docker installation

To start on macOS without Docker, please check the separate [README file](resources/mac/README.md).

#### Requirements

* A regular LAMP stack running PHP 7.4.0 or higher
* Virtuoso 7 (Triplestore database)
* Apache Solr
@@ -79,7 +82,8 @@ used tool.
* Install Virtuoso. For basic instructions, see [setting up
Virtuoso](https://github.com/ec-europa/rdf_entity/blob/8.x-1.x/README.md).
Due to [a bug in Virtuoso 6](https://github.com/openlink/virtuoso-opensource/issues/303) it is recommended to use
Virtuoso 7.
During installation some RDF based taxonomies will be imported from the `resources/fixtures` folder.
Make sure Virtuoso can read from this folder by adding it to the `DirsAllowed`
setting in your `virtuoso.ini`. For example:
@@ -154,7 +158,6 @@ $ composer install
$ ./vendor/bin/run toolkit:install-clean
```

#### Run the tests

Run the Behat test suite to validate your installation.
@@ -164,7 +167,8 @@ $ cd tests
$ ./behat
```

During development you can enable Behat test screen-shots by uncommenting this line
in `tests/features/bootstrap/FeatureContext.php`:
```php
// use \Drupal\joinup\Traits\ScreenShotTrait;
@@ -184,12 +188,10 @@ $ cd web
$ ../vendor/bin/phpunit
```

### Frontend development

See the [readme](web/themes/joinup/README.md) in the theme folder.

### Upgrade process
Joinup offers only _contiguous upgrades_. For instance, if your project is
@@ -223,12 +225,15 @@ For the above example:
`v2.75.x` as the fourth post update of the `mymodule` module (`02` major
version, `075` minor version, `03` update weight within the module).
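
As an illustration of the naming scheme described above, here is a minimal sketch of such a post update hook. The module name `mymodule` and the version digits come from the example above; the function body is hypothetical:

```php
/**
 * Example post update for release v2.75.x (placed in mymodule.post_update.php).
 *
 * The name encodes major version 02, minor version 075 and update weight 03.
 */
function mymodule_post_update_0207503(array &$sandbox): void {
  // Perform the data changes required by release v2.75.x here.
}
```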

### Technical details

* In [Rdf draft module](web/modules/custom/rdf_entity/rdf_draft/README.md)
there is information on handling draft in CRUD operations for rdf entities.
* In [Joinup notification module](web/modules/custom/joinup_notification/README.md)
there is information on how to handle notifications in Joinup.
* In [Joinup core module](web/modules/custom/joinup_core/README.md) there is
information on how to handle and create workflows.
* For additional details on our SPARQL-related functionality, see our
[SPARQL developer documentation](./docs/sparql.md).
* Check the [third party services](./docs/third-party-services.md)
documentation for additional information on the services we integrate with.
@@ -6,6 +6,11 @@ services:
    image: joinup/web:latest
    build:
      context: ./resources/docker/web
      # Default UID and GID for Linux users is 1000. If you are on Linux,
      # uncomment the following lines.
      # args:
      #   USER_ID: 1000
      #   GROUP_ID: 1000
      # Default UID for Mac users is 501. If you are on Mac, uncomment the
      # following lines.
      # args:
...
@@ -68,12 +68,22 @@ that the host user will have to match the ID and GID of the DAEMON_USER and
DAEMON_GROUP that runs the web server. This is needed in order to be able to
write to the files and directories that are mounted in the container.

By default, `www-data` user is set with ID `33` and `www-data` group is set with
GID `33`. If you are using a Linux machine, and you are using the default user,
you should use the following settings:

```yaml
web:
  image: joinup/web:latest
  build:
    context: ./resources/docker/web
    args:
      USER_ID: 1000
      GROUP_ID: 1000
```

If you are using a Mac, the default user ID and group ID are `501` and `20`
respectively. In that case, you need to use the following settings:
```yaml
web:
...
Release procedure
=================

We have 2 repositories that Joinup uses: the
[development repository](https://git.fpfis.tech.ec.europa.eu/digit/digit-joinup-dev),
which is where the development takes place, and the
[reference repository](https://git.fpfis.tech.ec.europa.eu/ec-europa/digit-joinup-reference),
which is where the releases are published.
The two main branches are the `master` branch on the reference repository and
the `develop` branch on the development repository. The master branch is the
reference for the production environment.
For this guide, we will assume that you have cloned the development repository,
and that you have a remote named `origin` that points to the development
repository, and a remote named `reference` that points to the reference
repository, but you can adapt according to your setup. After cloning the
development repository, you can add the reference repository as a remote by
running the following command:
```
$ git remote add reference https://git.fpfis.tech.ec.europa.eu/ec-europa/digit-joinup-reference.git
```
The release procedure is as follows:
1. Create a release ticket in Jira. The release should be named after the
release number. The release ticket is a placeholder for the tasks below.
2. Fetch the latest changes from the reference repository and checkout the
`master` branch. Create a new release branch from the `master` branch. Name
the branch `release-x.y.z`, where `x.y.z` is the version number of the
release. Fetch the latest changes from the development repository and merge
origin's `develop` branch into the release branch. Push the new branch to the
reference repository.
The commands for the above are the following:
```
$ git fetch reference
$ git checkout reference/master
$ git checkout -b release-x.y.z
$ git fetch origin
$ git merge origin/develop
$ git push reference release-x.y.z
```
3. Create a merge request from the release branch to the `master` branch. When
the tests pass, merge the release branch into the `master` branch. This will
automatically run the acceptance environment build. The release ticket should
be moved to 'UAT' so that the functional team can test the release.
4. If any last minute problems are discovered during acceptance testing, these
will be fixed in merge requests that are merged directly into the release
branch. Of course, the normal procedure to create Jira tickets for these
issues and move them in QA should be followed.
5. Check all resolved tickets from the previous sprint(s) and verify that they
have a version set in the "Fix Version" field. Some tickets might be resolved
but are not part of any Joinup release (for example: analysis/investigations,
work done upstream, infrastructure work etc.). These should get "NOVERSION".
The tickets can be listed by browsing all tickets in Jira and using the query
`project = ISAICP AND Sprint = "Joinup sprint N" AND status = Resolved`.
6. List all tickets in Jira that have the "Fix Version" field set to the
upcoming release, using the query `project = ISAICP AND fixVersion = x.y.z`.
These tickets will be used to create the changelog in the next step.
7. Create a markdown changelog from the list of tickets. The changelog should
contain the following sections:
- New features
- Improvements
- Bug fixes
- Security

Do not create the changelog in a release; use the release Jira ticket
instead.
8. When the release is approved in UAT, and the release ticket is moved to
'Accepted', create a tag based on the `master` branch. The tag should be
created in the reference repository. The tag should be named `vx.y.z`, where
`x.y.z` is the version number of the release. For example, the tag for the
1.0.0 release should be named `v1.0.0`. The description for the tag should
be the changelog that was created in the previous step. **The creation of
the tag will trigger the production environment build.**
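
For illustration, creating and pushing the tag could look like this. This is only a minimal sketch: `v1.0.0` and `changelog.md` are placeholders, and the exact commands may differ from the team's tooling:

```
$ git fetch reference
$ git tag -a -F changelog.md v1.0.0 reference/master
$ git push reference v1.0.0
```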

Post release procedure
======================

After the release is done, the following steps should be taken:

1. Move the release ticket to 'In Progress'.
2. Create a new branch based on the `master` branch from the reference
repository named `release-x.y.z-post`. This branch will be used to clean up
the updates that were made during the release procedure.
3. Clean any instances of the `hook_update_N`, `hook_post_update_NAME` and
`hook_deploy_NAME` functions that were added during the release procedure.
4. Push the branch to the development repository and create a merge request
from the branch to the `develop` branch. Move the release ticket to the QA
column and allow it to be picked up by the team. When the ticket is approved,
the team member should merge the merge request and move the release ticket to
the 'Resolved' column.

Hotfix procedure
================

Hotfixes are patch versions of the project. For example, if the current
version is `1.0.0`, the next hotfix will be `1.0.1`. If another hotfix is needed
before the next release, the version will be `1.0.2`. The update functions
should be numbered after the next release. For example, if version
`1.21.0` is released, consider that all updates named with `1021##` are already
deployed and deleted. Thus, even though the hotfix release is `1.21.1`, the
update functions should be named `1022##` and **WILL NOT** be deleted until the
next minor release (version `1.22.0`).
The hotfix procedure is as follows:
1. Create a hotfix ticket in Jira. The hotfix should be named after the hotfix
number. The hotfix ticket is a placeholder for the tasks below.
2. Create a new branch from the `master` branch. Name the branch `release-x.y.z`,
where `x.y.z` is the version number of the hotfix. Push the new branch to the
reference repository.
3. Any ticket needed for the hotfix will check out the hotfix release branch and
will have a merge request against the hotfix release branch. When the tests
pass, the merge request will be merged into the hotfix release branch.
4. Repeat the normal release procedure from step 5. The hotfix ticket should be
moved to 'UAT' so that the functional team can test the hotfix.

**Important**: The hotfix does **not** have a post release procedure. The reason
for this is that the hotfix uses numbering for the update functions that is
based on the next release. This means that the update functions will be deleted
when the next release is done, and that they should not be removed earlier, in
order to avoid naming conflicts.
What is SPARQL?
---------------
SPARQL is a query language for RDF. It is used to query data stored in a
triplestore. The triplestore used in this project is
[Virtuoso](http://virtuoso.openlinksw.com/).
SPARQL is a standardised language, and is used by many triplestores.
[This page](https://www.w3.org/TR/sparql11-query/) gives a good overview of
the language.
The SPARQL endpoint for this project is at `http://web:8890/sparql`.
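
For a quick check that the endpoint responds, here is a minimal sketch using curl. The query, the `LIMIT` and the `Accept` header are arbitrary examples; the endpoint URL is the one above:

```
$ curl -H 'Accept: text/csv' \
    --data-urlencode 'query=SELECT * WHERE { ?s ?p ?o . } LIMIT 10' \
    http://web:8890/sparql
```
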
SPARQL Queries
--------------
Similar to MySQL, SPARQL queries are written in a SQL-like syntax. The
following is a simple example of a SPARQL query:
SELECT * WHERE {
?s ?p ?o .
}
The above query will return all the data in the triplestore.
The `SELECT` keyword is used to specify the type of query. In this case, we
are using `SELECT *` to return all the data in the triplestore.
The `WHERE` keyword is used to specify the conditions of the query. In this
case, we are using `?s ?p ?o .` to return all the data in the triplestore.
The `?s`, `?p` and `?o` are variables. They can be used to specify the
subject, predicate and object of the query. `?s` is short for `?subject`,
`?p` is short for `?predicate` and `?o` is short for `?object`. A triple is
made up of a subject, predicate and object - this is the basic unit of data
in RDF. The triple represents a statement about the subject: roughly, "the
subject has this predicate with that object as its value". For our project, it
is easier to consider the ?s as the entity ID, the ?p as the field URI and the
?o as the value. The field URI is the mapping of a field name to a URI. For
example, the field URI for the field "Title" is
`http://purl.org/dc/terms/title`. The field URI is used to identify the field
in the triplestore.
The `.` is used to separate the conditions of the query.
Sample queries
--------------
Below are some sample queries that can be run against the triplestore.
### Get all the data
SELECT * WHERE {
?s ?p ?o .
}
### Get all the data for a specific subject
SELECT * WHERE {
<http://example.com/subject> ?p ?o .
}
You can find the URI ID of the entities in Joinup by visiting the menu
"Metadata > Export".
### Get all predicates
SELECT DISTINCT ?p WHERE {
?s ?p ?o .
}
### Get all the data for a specific predicate
SELECT * WHERE {
?s <http://example.com/predicate> ?o .
}
### Get all available graphs
SELECT DISTINCT ?g WHERE {
GRAPH ?g {
?s ?p ?o .
}
}
### Get all the data for a specific graph
SELECT * WHERE {
GRAPH <http://example.com/graph> {
?s ?p ?o .
}
}
### Get all the data for a specific subject and predicate
SELECT * WHERE {
<http://example.com/subject> <http://example.com/predicate> ?o .
}
### Get all the data for a specific subject and predicate in a specific graph
SELECT * WHERE {
GRAPH <http://example.com/graph> {
<http://example.com/subject> <http://example.com/predicate> ?o .
}
}
### Get entities with a specific title
SELECT DISTINCT ?s WHERE {
?s <http://purl.org/dc/terms/title> ?o .
FILTER regex(?o, "My title", "i")
}
### Get entities with a specific title and a specific field
SELECT DISTINCT ?s WHERE {
?s <http://purl.org/dc/terms/title> ?o .
FILTER regex(?o, "My title", "i")
?s <http://example.com/predicate> ?value .
}
### Get entities that reference a specific entity through some field mapping
SELECT DISTINCT ?s WHERE {
?s <http://example.com/predicate> <http://example.com/subject> .
}
SPARQL Mappings
---------------
In Joinup, we use `sparql_entity_storage` to map the RDF data to Drupal
entities. The way this is done is through the SPARQL mapping config entities and
the field third party settings.
An example of the SPARQL mapping entity files is the collection mapping. The
collection mapping is defined in the file
`config/install/sparql_entity_storage.mapping.rdf_entity.collection.yml`.
The collection mapping defines the following (the snippet below might be
outdated, but it is still a valid example):
```yaml
third_party_settings:
  rdf_schema_field_validation:
    property_predicates:
      - 'http://www.w3.org/2000/01/rdf-schema#domain'
    graph: 'http://adms-definition'
    class: 'http://www.w3.org/2000/01/rdf-schema#Class'
```
A list of properties that are used to describe the class of the entity. This is
used for the field validation.
```yaml
rdf_type: 'http://www.w3.org/ns/dcat#Catalog'
```
The RDF type of the entity.
This is the URI mapped to the bundle of the entity. You can use it for example
to find the entities of a specific bundle. For example, for collections, the
following query will return all the collections:
```sparql
SELECT * WHERE {
?s <http://www.w3.org/1999/02/22-rdf-syntax-ns#type> <http://www.w3.org/ns/dcat#Catalog> .
}
```
```yaml
graph:
  default: 'http://joinup.eu/collection/published'
  draft: 'http://joinup.eu/collection/draft'
```
The graph where the data is stored. The `default` graph is the graph where the
published data is stored. The `draft` graph is the graph where the draft data
is stored.
The graphs are mainly groupings of triples. A good way to understand this is to
think of the graphs as the tables in a relational database. The triples are the
rows in the tables. The graphs are used to separate the data. In Joinup, we use
graphs to distinguish between bundles and between published and draft data.
For example, in order to retrieve all collection URI IDs that are published, the
following query can be used:
```sparql
SELECT * WHERE {
GRAPH <http://joinup.eu/collection/published> {
?s <http://www.w3.org/1999/02/22-rdf-syntax-ns#type> <http://www.w3.org/ns/dcat#Catalog> .
}
}
```
```yaml
base_fields_mapping:
  rid:
    target_id:
      predicate: 'http://www.w3.org/1999/02/22-rdf-syntax-ns#type'
      format: resource
  uid:
    target_id:
      predicate: 'http://joinup.eu/owner/uid'
      format: 'xsd:integer'
  label:
    value:
      predicate: 'http://purl.org/dc/terms/title'
      format: t_literal
  created:
    value:
      predicate: 'http://purl.org/dc/terms/issued'
      format: 'xsd:dateTime'
  changed:
    value:
      predicate: 'http://purl.org/dc/terms/modified'
      format: 'xsd:dateTime'
  uuid:
    value:
      predicate: ''
      format: ''
  langcode:
    value:
      predicate: 'http://joinup.eu/language'
      format: t_literal
  default_langcode:
    value:
      predicate: 'http://joinup.eu/language/default'
      format: literal
  content_translation_source:
    value:
      predicate: 'http://joinup.eu/language/translation_source'
      format: t_literal
  content_translation_outdated:
    value:
      predicate: 'http://joinup.eu/language/translation_outdated'
      format: t_literal
  content_translation_uid:
    target_id:
      predicate: 'http://joinup.eu/language/translation_author'
      format: t_literal
  content_translation_status:
    value:
      predicate: 'http://joinup.eu/language/translation_status'
      format: t_literal
  content_translation_created:
    value:
      predicate: 'http://joinup.eu/language/translation_created_time'
      format: t_literal
  content_translation_changed:
    value:
      predicate: 'http://joinup.eu/language/translation_changed_time'
      format: t_literal
  graph:
    value:
      predicate: ''
      format: ''
```
These are the base fields of the entity. The base fields are the fields that
are defined in the `RdfEntity` class. As you can see, the `rid` field is mapped
to the `http://www.w3.org/1999/02/22-rdf-syntax-ns#type` predicate and all the
other fields are mapped to the predicates that are defined in Joinup.
Properties without a mapping value will not be stored in the database. For
example, the uuid field will not be stored in the database according to the
above YAML file. This is the source of the mappings that can be used as
predicates in your queries against the database. For example, if you want to
find collections with the title set to 'My title', you will need the mappings
for the `rid` and the `label` fields. The `rid` field is mapped to the
`http://www.w3.org/1999/02/22-rdf-syntax-ns#type` predicate and the `label`
field is mapped to the `http://purl.org/dc/terms/title` predicate. So, the
following query will return all the collections with the title 'My title':
```sparql
SELECT * WHERE {
GRAPH <http://joinup.eu/collection/published> {
?s <http://www.w3.org/1999/02/22-rdf-syntax-ns#type> <http://www.w3.org/ns/dcat#Catalog> .
?s <http://purl.org/dc/terms/title> "My title" .
}
}
```
```yaml
entity_id_plugin: joinup_po_namespace
```
The plugin that is used to generate the entity ID. The entity ID is the URI of
the entity. `joinup_po_namespace` is the plugin that is used to generate the
entity ID in Joinup.
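
To connect this to Drupal code, here is a minimal, hypothetical sketch (not taken from the Joinup code base) of how these mappings are used implicitly when you go through the entity API instead of writing SPARQL by hand. The URI ID below is made up:

```php
// The storage provided by sparql_entity_storage translates entity API calls
// into SPARQL using the mappings described above, so the URI is the entity ID.
$storage = \Drupal::entityTypeManager()->getStorage('rdf_entity');
$collection = $storage->load('http://example.com/subject');

// An entity query on base fields uses the same mappings: 'rid' becomes the
// rdf:type predicate and 'label' becomes http://purl.org/dc/terms/title.
$ids = $storage->getQuery()
  ->condition('rid', 'collection')
  ->condition('label', 'My title')
  ->accessCheck(FALSE)
  ->execute();
```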
# The URL used here is no longer dereferenceable; however, it is still in use in the dataset.
SELECT ?bundles
WHERE {
?bundles rdfs:isDefinedBy <http://joinup.ec.europa.eu/asset/adms_foss/release/release100>.
}
\ No newline at end of file
SELECT DISTINCT ?prop
WHERE {
?entity rdf:type <http://purl.org/adms/sw/SoftwarePackage>.
?entity ?prop ?value
}
\ No newline at end of file
SELECT ?property ?type
WHERE {
<http://www.w3.org/ns/adms#AssetRepository> ?property ?obj.
?property <http://www.w3.org/2000/01/rdf-schema#range> ?type
}
GROUP BY (?property)
\ No newline at end of file
SELECT ?prop ?value
WHERE{
<https://adullact.net/frs/download.php/file/2506/acogit-client-gwt.war#package> ?prop ?value
}
ORDER BY DESC (?prop)
SELECT ?sub
WHERE {
?sub ?pred rdfs:Class
}
GROUP BY (?sub)
ORDER BY DESC (?sub)
Third party services
====================
This application uses the following third party services:
- [Webtools analytics](#webtools-analytics)
Webtools analytics
------------------
The Webtools analytics service is used to collect anonymous usage statistics
about the application. The data is used to improve the application and to
monitor its performance.
The service is based on the [Matomo](https://matomo.org/) open source analytics
platform. The documentation for the webtools analytics service can be found
[here](https://webgate.ec.europa.eu/fpfis/wikis/display/webtools/Europa+Analytics).
The production environment is at the endpoint
[https://webanalytics.europa.eu/](https://webanalytics.europa.eu/). The
development environment is at the endpoint
[https://webanalytics.acc.fpfis.tech.ec.europa.eu/](https://webanalytics.acc.fpfis.tech.ec.europa.eu/).
In order to use the instance, the following environment variables must be set:
```dotenv
OE_WEBTOOLS_ANALYTICS_SITE_ID=
OE_WEBTOOLS_ANALYTICS_SITE_PATH=
OE_WEBTOOLS_ANALYTICS_SITE_INSTANCE=
```
The site ID is provided by the devops team. The site path is the path of the
site where the analytics are collected. The site instance is the instance of
the site where the analytics are collected.
Additionally, the URL of your local instance must match the URL accepted by the
service. The development service accepts the following URLs:
* digit-joinup.acc.fpfis.tech.ec.europa.eu
So, in order to use the development service, you must set the following in your
hosts file:
```hosts
<your local IP - usually 127.0.0.1> digit-joinup.acc.fpfis.tech.ec.europa.eu
```
For the site ID, please ask your team to provide you with the hash. The site path is
the URL above and the site instance is `testing`.
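
For example, a local setup pointing at the development service could look like this; the site ID is a placeholder for the hash provided by your team:

```dotenv
OE_WEBTOOLS_ANALYTICS_SITE_ID=<hash provided by your team>
OE_WEBTOOLS_ANALYTICS_SITE_PATH=digit-joinup.acc.fpfis.tech.ec.europa.eu
OE_WEBTOOLS_ANALYTICS_SITE_INSTANCE=testing
```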
@@ -62,9 +62,12 @@ phpcs:
# number of available processor cores. If the system has fast storage then
# this can be increased further.
parallel: 1
# Set this config to `true` in your `runner.yml` in order to enable running
# PHP coding standards on Git push. Due to the complexity of the docker
# environment, with git running on the host but the scripts needing to run
# inside the container, it is very hard to catch all cases. For this
# reason, this is disabled by default.
run_on_push: false
commands:
...
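
For example, to re-enable the check locally, you could add the following to your `runner.yml`. This is a minimal sketch, assuming the option lives under the `phpcs` key shown above:

```yaml
phpcs:
  run_on_push: true
```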