
datakaveri / iudx-resource-server


IUDX Data publication and subscription portal

Home Page: https://rs.iudx.org.in/apis/

License: Apache License 2.0

Java 98.30% HTML 0.18% Python 1.01% Dockerfile 0.13% Shell 0.04% PLpgSQL 0.35%
smart-cities iot data-exchange

iudx-resource-server's Introduction


iudx-resource-server

The resource server is IUDX's data discovery, data publication and data subscription portal. It allows data providers to publish their data resources in accordance with an IUDX-vocabulary-annotated metadata document, and data subscribers to query and subscribe to data resources as per the consent of the provider. Consumers can access data from the resource server using HTTPS and AMQPS.

Features

  • Provides data access from available resources using standard APIs, streaming subscriptions (AMQP) and/or callbacks
  • Search and count APIs for searching through available data: support for spatial (circle, polygon, bbox, linestring), temporal (before, during, after) and attribute searches
  • Adaptor registration endpoints and streaming endpoints for data ingestion
  • Integration with an authorization server (token introspection) to serve private data as per the access control policies set by the provider
  • End-to-end encryption supported using certificates
  • Secure data access over TLS
  • Scalable, service-mesh architecture based implementation using open source components: the Vert.x API framework, Elasticsearch/Logstash for the database and RabbitMQ as the data broker
  • Hazelcast- and Zookeeper-based cluster management and service discovery
  • Integration with an auditing server for metering purposes

API Docs

The API docs can be found here.

Prerequisites

External Dependencies Installation

The Resource Server connects with various external dependencies, namely:

  • ELK stack
  • PostgreSQL
  • RabbitMQ
  • Redis
  • AWS S3

Installation instructions for the above, along with the configuration needed to modify the database URL, port and associated credentials, can be found in the appropriate sections here

Get Started

Docker based

  1. Install docker and docker-compose
  2. Clone this repo
  3. Build the images ./docker/build.sh
  4. Modify the docker-compose.yml file to map the config file you just created
  5. Start the server in production (prod) or development (dev) mode using docker-compose docker-compose up prod

Maven based

  1. Install java 11 and maven
  2. Use the maven exec plugin based starter to start the server mvn clean compile exec:java@resource-server

JAR based

  1. Install java 11 and maven
  2. Set Environment variables
export RS_URL=https://<rs-domain-name>
export LOG_LEVEL=INFO
  3. Use maven to package the application as a JAR: mvn clean package -Dmaven.test.skip=true
  4. Two JAR files will be generated in the target/ directory
    • iudx.resource.server-cluster-0.0.1-SNAPSHOT-fat.jar - clustered vert.x containing micrometer metrics
    • iudx.resource.server-dev-0.0.1-SNAPSHOT-fat.jar - non-clustered vert.x and does not contain micrometer metrics

Running the clustered JAR

Note: The clustered JAR requires Zookeeper to be installed. Refer here to learn more about how to set up Zookeeper. Additionally, the zookeepers key in the config being used needs to be updated with the IP address/domain of the system running Zookeeper.

The JAR requires 3 runtime arguments when running:

  • --config/-c : path to the config file
  • --hostname/-i : the hostname for clustering
  • --modules/-m : comma separated list of module names to deploy

e.g. java -jar target/iudx.resource.server-cluster-0.0.1-SNAPSHOT-fat.jar --host $(hostname) -c configs/config.json -m iudx.resource.server.database.archives.DatabaseVerticle,iudx.resource.server.authenticator.AuthenticationVerticle,iudx.resource.server.metering.MeteringVerticle,iudx.resource.server.database.postgres.PostgresVerticle

Use the --help/-h argument for more information. You may additionally append an RS_JAVA_OPTS environment variable containing any Java options to pass to the application.

e.g.

$ export RS_JAVA_OPTS="-Xmx4096m"
$ java $RS_JAVA_OPTS -jar target/iudx.resource.server-cluster-0.0.1-SNAPSHOT-fat.jar ...

Running the non-clustered JAR

The JAR requires 1 runtime argument when running:

  • --config/-c : path to the config file

e.g. java -Dvertx.logger-delegate-factory-class-name=io.vertx.core.logging.Log4j2LogDelegateFactory -jar target/iudx.resource.server-dev-0.0.1-SNAPSHOT-fat.jar -c configs/config.json

Use the --help/-h argument for more information. You may additionally append an RS_JAVA_OPTS environment variable containing any Java options to pass to the application.

e.g.

$ export RS_JAVA_OPTS="-Xmx1024m"
$ java $RS_JAVA_OPTS -jar target/iudx.resource.server-dev-0.0.1-SNAPSHOT-fat.jar ...

Testing

Unit tests

  1. Run the tests using mvn clean test checkstyle:checkstyle pmd:pmd
  2. Reports are stored in ./target/

Integration tests

Integration tests are written using Rest Assured

  1. Run the server through either docker, maven or redeployer
  2. Run the integration tests: mvn test-compile failsafe:integration-test -DskipUnitTests=true -DintTestHost=localhost -DintTestPort=8080
  3. Reports are stored in ./target/

Encryption

All count and search APIs can return encrypted data. To receive data in encrypted form, the user provides a publicKey header whose value is a public key generated from a lazysodium sealed box keypair, encoded in URL-safe base64. The encrypted data can then be decrypted with the lazysodium sealed box by supplying the private and public keys.
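
A minimal consumer-side sketch, assuming the lazysodium-java bindings (package and method names vary slightly between versions), of how the keypair, the publicKey header value and the decryption step fit together:

import java.util.Base64;

import com.goterl.lazysodium.LazySodiumJava;
import com.goterl.lazysodium.SodiumJava;
import com.goterl.lazysodium.utils.KeyPair;

public class SealedBoxExample {
  public static void main(String[] args) throws Exception {
    LazySodiumJava sodium = new LazySodiumJava(new SodiumJava());

    // 1. Generate a sealed-box keypair (Box.Lazy helpers from lazysodium-java).
    KeyPair keyPair = sodium.cryptoBoxKeypair();

    // 2. URL-safe base64 encode the public key; this value goes in the publicKey header.
    String publicKeyHeader = Base64.getUrlEncoder()
        .encodeToString(keyPair.getPublicKey().getAsBytes());
    System.out.println("publicKey header: " + publicKeyHeader);

    // 3. The server seals the response with the public key; the consumer opens the
    //    sealed box with the full keypair. Shown here round-tripping a sample payload.
    String cipherText = sodium.cryptoBoxSealEasy("sample payload", keyPair.getPublicKey());
    System.out.println(sodium.cryptoBoxSealOpenEasy(cipherText, keyPair)); // sample payload
  }
}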

Contributing

We follow a Git merge based workflow

  1. Fork this repo
  2. Create a new feature branch in your fork. Multiple features must have a hyphen-separated name, or refer to a milestone name as mentioned in GitHub -> Projects
  3. Commit to your fork and raise a Pull Request against upstream

License

View License

iudx-resource-server's People

Contributors

abhi4578, admin-datakaveri, ananjaykumar2, ankita-trigyn, ankitmashu, code-akki, dependabot[bot], divyasreemunagavalasa, gopal-mahajan, kailash, kailash-adhikari, karun-singh, kranthi-guribilli, mahimatics, mdadil-dk, pranavrd, pranavv0, rajeevkush, rraks, shreelakshmijoshi, skdwriting, sushanthakumar, swaminathanvasanth, tanvi029, tharak-ram1, umeshpacholitrigyn


iudx-resource-server's Issues

1.1.3. Refresh Subscription token API

This API shall allow IUDX consumers to continue their streaming subscription.

  • It is assumed that information about the queues, bindings and last-refreshed time is available with the resource server (RS).
  • When this API is called, the last-refreshed time should be updated.

1.1.2. Admin API for updating token invalidation

For details about the API refer to the SRS document (section 1.1.2)

The API should perform the following:

  • Create the information in PostgreSQL
  • Broadcast this information over internal RabbitMQ

[Bug] Issue with default time filter.

If no temporal parameters are passed to the /entities endpoint, the application tries to enforce a default n-day (10 or 30 day) time filter. Currently this is not working as expected and returns a 400 response for an otherwise correct query.

Issue from log

{
   "error":{
      "root_cause":[
         {
            "type":"parse_exception",
            "reason":"unit [n] not supported for date math [-nowd/d]"
         }
      ],
      "type":"search_phase_execution_exception",
      "reason":"all shards failed",
      "phase":"query",
      "grouped":true,
      "failed_shards":[
         {
            "shard":0,
            "index":"<<index_id>>",
            "node":"",
            "reason":{
               "type":"parse_exception",
               "reason":"unit [n] not supported for date math [-nowd/d]"
            }
         }
      ]
   },
   "status":400
}
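
For reference, Elasticsearch date math expects expressions such as now-10d/d for "the last 10 days, rounded to the day"; the malformed -nowd/d in the log suggests the anchor, the day count and the sign were concatenated in the wrong order. A hypothetical illustration (defaultDays and lowerBound are not the server's actual names):

public class DefaultTimeFilter {

  // Hypothetical helper: build the lower bound of the default window in valid
  // Elasticsearch date math, e.g. "now-10d/d" for a 10-day default filter.
  static String lowerBound(int defaultDays) {
    return "now-" + defaultDays + "d/d";
  }

  public static void main(String[] args) {
    System.out.println(lowerBound(10)); // now-10d/d -> accepted by Elasticsearch
    System.out.println(lowerBound(30)); // now-30d/d
    // The log above shows "-nowd/d", which Elasticsearch rejects with
    // "unit [n] not supported for date math".
  }
}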


1.10. Enhanced access policy decisions

Based on the JWT information about the total number of API calls allowed per day for the resource, allow or deny access to the user based on immuDB data

Issues related to metering

  1. Not able to get consumer audit details for an open resource with an open token:
    curl --location --request GET 'https://rs.iudx.org.in/ngsi-ld/v1/consumer/audit?id=datakaveri.org/04a15c9960ffda227e9546f3f46e629e1fe4132b/rs.iudx.org.in/pune-env-aqm/184ba502-22a8-ad15-a8f1-c966cd3aa7a7&timerel=during&time=2022-03-10T14:20:00Z&endtime=2022-03-16T14:20:00Z' --header 'token: <open-resource-token>' --header 'options: count'
    curl response:
    {"type":"urn:dx:rs:invalidAuthorizationToken","title":"Not Authorized","detail":"Not Authorized"}

The failure logs at the resource server :

time=2022-05-02T06:22:23+0000 level="ERROR" loggerName=iudx.resource.server.authenticator.JwtAuthenticationServiceImpl message="error : {"401":"no access provided to endpoint"}" source=rs sourceUrl=rs.iudx.org.in exception=""
time=2022-05-02T06:22:23+0000 level="ERROR" loggerName=iudx.resource.server.authenticator.JwtAuthenticationServiceImpl message="failed - no access provided to endpoint" source=rs sourceUrl=rs.iudx.org.in exception=""
time=2022-05-02T06:22:23+0000 level="INFO" loggerName=iudx.resource.server.authenticator.JwtAuthenticationServiceImpl message="endPoint : /ngsi-ld/v1/consumer/audit" source=rs sourceUrl=rs.iudx.org.in exception=""
time=2022-05-02T06:22:23+0000 level="INFO" loggerName=iudx.resource.server.authenticator.JwtAuthenticationServiceImpl message="strategy : ConsumerAuthStrategy" source=rs sourceUrl=rs.iudx.org.in exception=""
time=2022-05-02T06:22:23+0000 level="ERROR" loggerName=iudx.resource.server.authenticator.JwtAuthenticationServiceImpl message="cache call result : [fail] (RECIPIENT_FAILURE,-1) No entry for given key" source=rs sourceUrl=rs.iudx.org.in exception=""
time=2022-05-02T06:22:23+0000 level="ERROR" loggerName=iudx.resource.server.apiserver.handlers.AuthHandler message="Error : Authentication Failure" source=rs sourceUrl=rs.iudx.org.in exception=""

  2. With a secure token having "api" access for a secure resource, the metering API fails in production.
    The curl:
curl --location --request GET 'https://rs.iudx.org.in/ngsi-ld/v1/consumer/audit?id=datakaveri.org/fb0924bef803e220e0d0bc8ceaf0a1f999442f32/rs.iudx.org.in/bengaluru-emergency-vehicles/ambulance-live&timerel=during&time=2022-03-10T14:20:00Z&endtime=2022-03-16T14:20:00Z' \
--header 'token: <secure-token>' --header 'options: count'

curl response
{"type":"urn:dx:rs:backend","title":"Bad Request","detail":"Bad Request"}

The failure logs:

    
time=2022-05-02T06:11:02+0000 level="ERROR" loggerName=iudx.resource.server.apiserver.ApiServerVerticle message="ERROR : Expecting Json from backend service [ jsonFormattingException ]" source=rs sourceUrl=rs.iudx.org.in exception=""
time=2022-05-02T06:11:02+0000 level="ERROR" loggerName=iudx.resource.server.apiserver.ApiServerVerticle message="Table reading failed." source=rs sourceUrl=rs.iudx.org.in exception=""
time=2022-05-02T06:11:02+0000 level="ERROR" loggerName=iudx.resource.server.apiserver.ApiServerVerticle message="Fail msg Timed out after waiting 30000(ms) for a reply. address: __vertx.reply.fcbe896a-4539-4bf0-9d5b-8484938927d9, repliedAddress: iudx.rs.metering.service" source=rs sourceUrl=rs.iudx.org.in exception=""
time=2022-05-02T06:10:34+0000 level="�[1;31mERROR�[m" loggerName=io.vertx.core.impl.ContextImpl message="Unhandled exception" source=rs sourceUrl=rs.iudx.org.in  exception=" java.util.NoSuchElementException: Column (metering.rsauditingtable.col0) does not exist|	at io.vertx.sqlclient.Row.getInteger(Row.java:110) ~[fatjar.jar:?]|	at iudx.resource.server.metering.MeteringServiceImpl.lambda$executeCountQuery$5(MeteringServiceImpl.java:195) ~[fatjar.jar:?]|	at io.vertx.core.impl.future.FutureImpl$1.onSuccess(FutureImpl.java:90) ~[fatjar.jar:?]|	at io.vertx.core.impl.future.FutureImpl$ListenerArray.onSuccess(FutureImpl.java:230) ~[fatjar.jar:?]|	at io.vertx.core.impl.future.FutureBase.emitSuccess(FutureBase.java:62) ~[fatjar.jar:?]|	at io.vertx.core.impl.future.FutureImpl.tryComplete(FutureImpl.java:179) ~[fatjar.jar:?]|	at io.vertx.core.impl.future.Composition$1.onSuccess(Composition.java:62) ~[fatjar.jar:?]|	at io.vertx.core.impl.future.FutureImpl$ListenerArray.onSuccess(FutureImpl.java:230) ~[fatjar.jar:?]|	at io.vertx.core.impl.future.FutureBase.lambda$emitSuccess$0(FutureBase.java:54) ~[fatjar.jar:?]|	at io.vertx.core.impl.EventLoopContext.execute(EventLoopContext.java:83) ~[fatjar.jar:?]|	at io.vertx.core.impl.DuplicatedContext.execute(DuplicatedContext.java:199) ~[fatjar.jar:?]|	at io.vertx.core.impl.future.FutureBase.emitSuccess(FutureBase.java:51) ~[fatjar.jar:?]|	at io.vertx.core.impl.future.FutureImpl.tryComplete(FutureImpl.java:179) ~[fatjar.jar:?]|	at io.vertx.core.impl.future.PromiseImpl.tryComplete(PromiseImpl.java:23) ~[fatjar.jar:?]|	at io.vertx.sqlclient.impl.QueryResultBuilder.tryComplete(QueryResultBuilder.java:102) ~[fatjar.jar:?]|	at io.vertx.sqlclient.impl.QueryResultBuilder.tryComplete(QueryResultBuilder.java:35) ~[fatjar.jar:?]|	at io.vertx.core.Promise.complete(Promise.java:66) ~[fatjar.jar:?]|	at io.vertx.core.Promise.handle(Promise.java:51) ~[fatjar.jar:?]|	at io.vertx.core.Promise.handle(Promise.java:29) ~[fatjar.jar:?]|	at io.vertx.sqlclient.impl.command.CommandResponse.fire(CommandResponse.java:46) ~[fatjar.jar:?]|	at io.vertx.sqlclient.impl.SocketConnectionBase.handleMessage(SocketConnectionBase.java:258) ~[fatjar.jar:?]|	at io.vertx.pgclient.impl.PgSocketConnection.handleMessage(PgSocketConnection.java:94) ~[fatjar.jar:?]|	at io.vertx.sqlclient.impl.SocketConnectionBase.lambda$init$0(SocketConnectionBase.java:96) ~[fatjar.jar:?]|	at io.vertx.core.net.impl.NetSocketImpl.lambda$new$1(NetSocketImpl.java:97) ~[fatjar.jar:?]|	at io.vertx.core.streams.impl.InboundBuffer.handleEvent(InboundBuffer.java:240) ~[fatjar.jar:?]|	at io.vertx.core.streams.impl.InboundBuffer.write(InboundBuffer.java:130) ~[fatjar.jar:?]|	at io.vertx.core.net.impl.NetSocketImpl.lambda$handleMessage$9(NetSocketImpl.java:390) ~[fatjar.jar:?]|	at io.vertx.core.impl.EventLoopContext.emit(EventLoopContext.java:52) ~[fatjar.jar:?]|	at io.vertx.core.impl.ContextImpl.emit(ContextImpl.java:294) ~[fatjar.jar:?]|	at io.vertx.core.impl.EventLoopContext.emit(EventLoopContext.java:24) ~[fatjar.jar:?]|	at io.vertx.core.net.impl.NetSocketImpl.handleMessage(NetSocketImpl.java:389) ~[fatjar.jar:?]|	at io.vertx.core.net.impl.ConnectionBase.read(ConnectionBase.java:153) ~[fatjar.jar:?]|	at io.vertx.core.net.impl.VertxHandler.channelRead(VertxHandler.java:154) ~[fatjar.jar:?]|	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) ~[fatjar.jar:?]|	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) 
~[fatjar.jar:?]|	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) ~[fatjar.jar:?]|	at io.netty.channel.CombinedChannelDuplexHandler$DelegatingChannelHandlerContext.fireChannelRead(CombinedChannelDuplexHandler.java:436) ~[fatjar.jar:?]|	at io.vertx.pgclient.impl.codec.PgEncoder.lambda$write$0(PgEncoder.java:88) ~[fatjar.jar:?]|	at io.vertx.pgclient.impl.codec.PgCommandCodec.handleReadyForQuery(PgCommandCodec.java:139) ~[fatjar.jar:?]|	at io.vertx.pgclient.impl.codec.PgDecoder.decodeReadyForQuery(PgDecoder.java:235) ~[fatjar.jar:?]|	at io.vertx.pgclient.impl.codec.PgDecoder.channelRead(PgDecoder.java:95) ~[fatjar.jar:?]|	at io.netty.channel.CombinedChannelDuplexHandler.channelRead(CombinedChannelDuplexHandler.java:251) ~[fatjar.jar:?]|	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) ~[fatjar.jar:?]|	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) ~[fatjar.jar:?]|	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) ~[fatjar.jar:?]|	at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) ~[fatjar.jar:?]|	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) ~[fatjar.jar:?]|	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) ~[fatjar.jar:?]|	at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) ~[fatjar.jar:?]|	at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) ~[fatjar.jar:?]|	at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:719) ~[fatjar.jar:?]|	at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:655) ~[fatjar.jar:?]|	at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:581) ~[fatjar.jar:?]|	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493) ~[fatjar.jar:?]|	at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) ~[fatjar.jar:?]|	at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) ~[fatjar.jar:?]|	at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ~[fatjar.jar:?]|	at java.lang.Thread.run(Unknown Source) [?:?]|"
    
time=2022-05-02T06:10:32+0000 level="INFO" loggerName=iudx.resource.server.authenticator.JwtAuthenticationServiceImpl message="User access is allowed." source=rs sourceUrl=rs.iudx.org.in exception=""
time=2022-05-02T06:10:32+0000 level="INFO" loggerName=iudx.resource.server.authenticator.authorization.ConsumerAuthStrategy message="allowed access : ["api","sub","file"]" source=rs sourceUrl=rs.iudx.org.in exception=""
time=2022-05-02T06:10:32+0000 level="INFO" loggerName=iudx.resource.server.authenticator.authorization.ConsumerAuthStrategy message="authorization request for : /ngsi-ld/v1/consumer/audit with method : GET" source=rs sourceUrl=rs.iudx.org.in exception=""
time=2022-05-02T06:10:32+0000 level="INFO" loggerName=iudx.resource.server.authenticator.JwtAuthenticationServiceImpl message="endPoint : /ngsi-ld/v1/consumer/audit" source=rs sourceUrl=rs.iudx.org.in exception=""
time=2022-05-02T06:10:32+0000 level="INFO" loggerName=iudx.resource.server.authenticator.JwtAuthenticationServiceImpl message="strategy : ConsumerAuthStrategy" source=rs sourceUrl=rs.iudx.org.in exception=""
time=2022-05-02T06:10:32+0000 level="ERROR" loggerName=iudx.resource.server.authenticator.JwtAuthenticationServiceImpl message="cache call result : [fail] (RECIPIENT_FAILURE,-1) No entry for given key" source=rs sourceUrl=rs.iudx.org.in exception=""

Reason: in production, the RS uses a different immuDB database than metering.
Possible solution: the code should take the database name from the config,

"meteringDatabaseName": "",

to form the exact column name (see the sketch after this issue). Compare:
public static final String COUNT_COLUMN_NAME = "(metering.rsauditingtable.col0)";

  3. If the consumer wants to get auditing details of other RS servers, such as GIS or DI, from the resource access server APIs https://rs.iudx.org.in/apis#operation/consumer%20search .
    The current logic in the code would likely not allow that, because the GIS server inserts audit data into a different table called gisauditingtable https://github.com/datakaveri/iudx-gis-interface/blob/71888173aadf6a7f25e7fcf8702c26648c5a4249/src/main/java/iudx/gis/server/metering/util/Constants.java#L25 while the resource access server consumer/provider search audit API only reads data from rsauditingtable. Similarly for DI: https://github.com/datakaveri/iudx-data-ingestion-server/blob/5d17699c39faf5d23aac7e6681607ac99fd76b95/src/main/java/iudx/data/ingestion/server/metering/util/Constants.java#L20.
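
A hypothetical sketch of the column-name fix suggested above, reading the database (and, as a further assumption, the table) name from the Vert.x config instead of hard-coding it; the config keys besides meteringDatabaseName are illustrative only:

import io.vertx.core.json.JsonObject;

public class MeteringColumnName {

  // Build the count column reference from config values rather than a constant.
  static String countColumn(JsonObject config) {
    String db = config.getString("meteringDatabaseName", "metering");        // from config
    String table = config.getString("meteringTableName", "rsauditingtable"); // assumed key
    return "(" + db + "." + table + ".col0)";
  }

  public static void main(String[] args) {
    JsonObject config = new JsonObject().put("meteringDatabaseName", "rsmetering");
    System.out.println(countColumn(config)); // (rsmetering.rsauditingtable.col0)
  }
}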

Remove the API key field from the create subscription response if the user already exists in the system.

Currently, creating a subscription returns the apiKey field in the response to every request. The first time, when the user does not exist in the system, it returns the actual password; but if the user already exists, it returns the hash of the existing password (since the password itself is not stored in the system).

Returning this password hash as the API key for existing users is creating confusion: users try to use the hash as the API key for consuming messages.

Steps to reproduce -

  • Create a subscription for a new user; it will create the subscription and respond with:
{
  "username": "<<username>>",
  "apiKey": 123456,
  "id": "<<id>>",
  "URL": "<<databroker id>>",
  "port": <<port>>,
  "vHost": "<<vhost>>"
}
  • Now try to create another subscription for the same user. The API key returned is now not the actual password but the hash of the password.
{
  "username": "<<username>>",
  "apiKey": hash(123456),
  "id": "<<id>>",
  "URL": "<<databroker id>>",
  "port": <<port>>,
  "vHost": "<<vhost>>"
}

Possible resolution:

  • Remove the API key field from the response for an already existing user.

1.5. Broadcast Unique Attributes Information

As mentioned in section 1.1.1, the onboarder will specify the unique id and the unique attribute with the Admin API. This information will be stored in the database, which shall be used as the source of truth. The same shall be published (broadcast) to interested parties using the internal RabbitMQ server.

Is there any missing configuration that needs to be performed to set up the resource server?

Hi Team,

I have set up Resource Server (3.5.0) locally by following the steps and configuration mentioned here.

To install the dependency stack I used the Deployment Repo.

My resource server instance is now up and I am able to access the APIs, though I have not yet tested it with a real example.

My question is: do we need to follow any configuration steps other than those mentioned in the above links, or are they sufficient?

I am asking because I did the same for the auth and catalogue servers, and after discussion with the IUDX team I found that some additional configuration was needed before using the respective APIs, even though I followed the installation guide properly.

Thanks
Deepak Kumar

1.2. Audit / Metering API

  • Allow a consumer to query the Metering / Audit API usage using temporal, resource-id, consumer-id queries
  • Allow a provider to query the Metering / Audit API usage using temporal, resource-id queries

1.8. Reset Password API

The current response for the reset API only has the attribute “password”. We need to change the response to an urn-based one, with consumer-id added to it.

Need More Information to Configure Resource Server

Hi Team,

I am trying to start Resource Server v3.5.0 locally and configure it by following https://github.com/datakaveri/iudx-resource-server/blob/v3.5.0/SETUP.md

I provided all the information for the verticle iudx.resource.server.databroker.DataBrokerVerticle as mentioned in the above link.

But when I run my instance using the command "mvn clean compile exec:java@resource-server", I get the error below:

level="ERROR" loggerName=iudx.resource.server.databroker.listeners.RevokeClientQListener message="Rabbit client startup failed.java.lang.IllegalStateException: Invalid configuration: 'virtualHost' must be non-null." source=rs sourceUrl=http://localhost exception=""

It looks like we need to add a virtualHost property for this verticle. I am not sure what the value of virtualHost should be; could you please guide me on this?

Could you also provide a complete sample config file with a description of the values to be provided?

Thanks
Deepak Kumar

[invalidAuthorizationToken] Getting auth error when trying to access public resource

Hi there. I am new to IUDX at the moment.

I was trying to make a GET call (using a bearer auth token) to datakaveri.org/04a15c9960ffda227e9546f3f46e629e1fe4132b/rs.iudx.org.in/pune-env-aqm/f36b4669-628b-ad93-9970-f9d424afbf75, which appears to be a public resource.

It gives me an invalidAuthorizationToken error. The exact request/response is:

request:

GET https://rs.iudx.org.in/ngsi-ld/v1/entities/?id=datakaveri.org/04a15c9960ffda227e9546f3f46e629e1fe4132b/rs.iudx.org.in/pune-point-of-interests/pa-locations

response:

{
    "type": "urn:dx:rs:invalidAuthorizationToken",
    "title": "Not Authorized",
    "detail": "Not Authorized"
}

Am I understanding this incorrectly? I mean, do I need to get access to public resources as well?
@swaminathanvasanth @rraks, could you take a quick look please?

1.18. Subscription API flow changes

For every create subscription API call, we need to update the user permissions.
STEP 1: Get the list of permissions of the user
Example
curl --location --request GET 'http://localhost:15672/api/users/15c7506f-c800-48d6-adeb-0542b03947c6/permissions' \
--header 'Authorization: Basic Z3Vlc3Q6Z3Vlc3Q=' \
--data-raw ''

Response
[
  {
    "user": "15c7506f-c800-48d6-adeb-0542b03947c6",
    "vhost": "IUDX",
    "configure": "",
    "write": "",
    "read": "15c7506f-c800-48d6-adeb-0542b03947c6/aqm-application"
  }
]

Take the “read” attribute value. Add a “|” followed by the queue name obtained in the request and proceed to the next step. Make sure to keep the write and configure permissions as is.

STEP 2: Update the permissions
Example
curl --location --request PUT 'http://localhost:15672/api/permissions/IUDX/15c7506f-c800-48d6-adeb-0542b03947c6' \
--header 'Authorization: Basic Z3Vlc3Q6Z3Vlc3Q=' \
--header 'Content-Type: application/json' \
--data-raw '{
  "configure": "",
  "write": "",
  "read": "15c7506f-c800-48d6-adeb-0542b03947c6/aqm-application|15c7506f-c800-48d6-adeb-0542b03947c6/itms-application"
}'
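
A hedged sketch of the same two-step flow using the Vert.x 4 WebClient (Future-based API); the host, port, vhost, credentials and queue name are the placeholder values from the example above, not production settings:

import io.vertx.core.Vertx;
import io.vertx.core.json.JsonObject;
import io.vertx.ext.web.client.WebClient;

public class UpdateReadPermission {
  public static void main(String[] args) {
    Vertx vertx = Vertx.vertx();
    WebClient client = WebClient.create(vertx);
    String userId = "15c7506f-c800-48d6-adeb-0542b03947c6";
    String newQueue = userId + "/itms-application"; // queue name from the subscription request

    // STEP 1: read the current permissions of the user.
    client.get(15672, "localhost", "/api/users/" + userId + "/permissions")
        .basicAuthentication("guest", "guest")
        .send()
        .compose(resp -> {
          JsonObject current = resp.bodyAsJsonArray().getJsonObject(0); // permissions on vhost IUDX
          String read = current.getString("read", "");
          // Append "|<queue>"; keep write and configure as they are.
          String updatedRead = read.isEmpty() ? newQueue : read + "|" + newQueue;
          JsonObject body = new JsonObject()
              .put("configure", current.getString("configure", ""))
              .put("write", current.getString("write", ""))
              .put("read", updatedRead);
          // STEP 2: write the updated permissions back for the vhost.
          return client.put(15672, "localhost", "/api/permissions/IUDX/" + userId)
              .basicAuthentication("guest", "guest")
              .sendJsonObject(body);
        })
        .onSuccess(resp -> System.out.println("permissions updated: " + resp.statusCode()))
        .onFailure(Throwable::printStackTrace);
  }
}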

1.13. Remove a subscription

We need to incorporate a cron-job routine that removes the user's subscription (from the user queue) if the last-refreshed-time of an id/entities entry is ">" the current time.
It is assumed that all the relevant information is populated in the database.
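
A hypothetical sketch of such a routine as a Vert.x periodic timer; expiredSubscriptions() and removeSubscription() stand in for the actual database query and data-broker call, and the hourly interval is an assumption:

import java.util.List;

import io.vertx.core.AbstractVerticle;

public class SubscriptionCleanupVerticle extends AbstractVerticle {

  private static final long INTERVAL_MS = 60 * 60 * 1000L; // run hourly (assumption)

  @Override
  public void start() {
    // Periodically scan the subscription table and drop entries whose
    // last-refreshed-time indicates the subscription should no longer live.
    vertx.setPeriodic(INTERVAL_MS, timerId ->
        expiredSubscriptions().forEach(this::removeSubscription));
  }

  private List<String> expiredSubscriptions() {
    return List.of(); // placeholder: query id/entities/last-refreshed-time from the database
  }

  private void removeSubscription(String queueName) {
    // placeholder: ask the data broker service to remove the queue/binding
  }
}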

Elastic client issue

Issue

The RS query requests work only alternately, i.e. the Elastic client, and in turn the RS, is able to return data only for alternate queries. Was it failing for a specific request or for all requests that involved the Elastic client? @swaminathanvasanth

The important error logs are below (the detailed log is attached as elastic-client-exception.txt):

    time=2022-02-15T08:54:08+0000 level="INFO" loggerName=iudx.resource.server.database.archives.QueryDecoder message="Info: Temporal Search block" source=rs sourceUrl=https://rs.iudx.org.in exception=""
    time=2022-02-15T08:54:08+0000 level="�[1;31mERROR�[m" loggerName=io.vertx.core.impl.ContextImpl message="Unhandled exception" source=rs sourceUrl=https://rs.iudx.org.in exception=" java.lang.StringIndexOutOfBoundsException: begin -1, end 0, length 55|	at java.lang.String.checkBoundsBeginEnd(Unknown Source) ~[?:?]|	at java.lang.String.substring(Unknown Source) ~[?:?]|	at iudx.resource.server.database.archives.ElasticClient$2.onFailure(ElasticClient.java:161) ~[fatjar.jar:?]|	at org.elasticsearch.client.RestClient.performRequestAsync(RestClient.java:317) ~[fatjar.jar:?]|	at iudx.resource.server.database.archives.ElasticClient.countAsync(ElasticClient.java:124) ~[fatjar.jar:?]|	at iudx.resource.server.database.archives.DatabaseServiceImpl.searchQuery(DatabaseServiceImpl.java:124) ~[fatjar.jar:?]|	at iudx.resource.server.database.archives.DatabaseServiceVertxProxyHandler.handle(DatabaseServiceVertxProxyHandler.java:124) ~[fatjar.jar:?]|	at iudx.resource.server.database.archives.DatabaseServiceVertxProxyHandler.handle(DatabaseServiceVertxProxyHandler.java:52) ~[fatjar.jar:?]|	at io.vertx.core.impl.EventLoopContext.emit(EventLoopContext.java:52) ~[fatjar.jar:?]|	at io.vertx.core.impl.DuplicatedContext.emit(DuplicatedContext.java:194) ~[fatjar.jar:?]|	at io.vertx.core.eventbus.impl.MessageConsumerImpl.dispatch(MessageConsumerImpl.java:177) ~[fatjar.jar:?]|	at io.vertx.core.eventbus.impl.HandlerRegistration$InboundDeliveryContext.next(HandlerRegistration.java:163) ~[fatjar.jar:?]|	at io.vertx.core.eventbus.impl.HandlerRegistration$InboundDeliveryContext.dispatch(HandlerRegistration.java:128) ~[fatjar.jar:?]|	at io.vertx.core.impl.AbstractContext.dispatch(AbstractContext.java:107) ~[fatjar.jar:?]|	at io.vertx.core.eventbus.impl.HandlerRegistration.dispatch(HandlerRegistration.java:104) ~[fatjar.jar:?]|	at io.vertx.core.eventbus.impl.MessageConsumerImpl.deliver(MessageConsumerImpl.java:183) ~[fatjar.jar:?]|	at io.vertx.core.eventbus.impl.MessageConsumerImpl.doReceive(MessageConsumerImpl.java:168) ~[fatjar.jar:?]|	at io.vertx.core.eventbus.impl.HandlerRegistration.lambda$receive$0(HandlerRegistration.java:54) ~[fatjar.jar:?]|	at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:164) ~[fatjar.jar:?]|	at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:472) ~[fatjar.jar:?]|	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:500) ~[fatjar.jar:?]|	at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) ~[fatjar.jar:?]|	at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) ~[fatjar.jar:?]|	at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ~[fatjar.jar:?]|	at java.lang.Thread.run(Unknown Source) [?:?]|"
    time=2022-02-15T08:54:08+0000 level="ERROR" loggerName=iudx.resource.server.database.archives.ElasticClient message="Request cannot be executed; I/O reactor status: STOPPED" source=rs sourceUrl=https://rs.iudx.org.in exception=""

Troubleshooting

  1. Restarting the pod made it work; that is, the alternate search queries no longer failed and there was no exception.

Possible Issue/Solution

  1. The application code corresponding to this error is in ElasticClient.java, in the onFailure method passed to performRequestAsync. An unhandled exception occurs there during a string operation, which in turn probably triggers a known Elastic client issue: "Request cannot be executed; I/O reactor status: STOPPED". That issue causes the connection to be closed, so subsequent requests fail with the error reported in the second log and no data is returned.
    In the current-deployment branch, the relevant code is

    String error = e.getMessage().substring(e.getMessage().indexOf("{"),

and in the current master branch

String error = e.getMessage().substring(e.getMessage().indexOf("{"),

  3. The workaround suggested is to ensure that no exceptions are thrown out of the onFailure method (a defensive version is sketched below).
  4. What caused the request to reach the onFailure path of performRequestAsync in the first place? Most probably an initial connection issue with Elasticsearch caused a failure, which then triggered the known Elastic client issue via the string exception in onFailure, leading to a cyclical problem.
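
A defensive version of that parsing, as an illustration only (not the project's actual code): guard the indexOf/substring calls so the failure handler itself can never throw, even when the message carries no JSON payload:

public class ElasticErrorParsing {

  // The StringIndexOutOfBoundsException above comes from indexOf("{") returning -1
  // when the failure message has no JSON in it; guard for that case.
  static String safeErrorJson(Throwable e) {
    String message = e.getMessage();
    if (message == null) {
      return "{\"error\":\"unknown failure\"}";
    }
    int start = message.indexOf('{');
    int end = message.lastIndexOf('}');
    if (start < 0 || end < start) {
      // e.g. "Request cannot be executed; I/O reactor status: STOPPED"
      return "{\"error\":\"" + message.replace("\"", "'") + "\"}";
    }
    return message.substring(start, end + 1);
  }

  public static void main(String[] args) {
    System.out.println(safeErrorJson(new RuntimeException("I/O reactor status: STOPPED")));
  }
}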

PS: The logs were obtained from Loki through Grafana, so there might be some reordering (as many entries have the same timestamp).

Some URN inconsistencies

Some URN inconsistencies I found while testing

  1. MethodNotAllowed is not returned for an improper method, e.g. requesting the API docs using the TRACK method:
    curl -XTRACK https://rs.iudx.org.in/apis -vvv
    Response: response code 404, JSON received: {"type":"urn:dx:rs:general","title":"Not Found","detail":"Not Found"}
    Expected: something like response code 405, JSON received: {"type":"urn:dx:rs:MethodNotAllowed","title":"Method Not Allowed","detail":"Specified method is invalid for this resource"}
  2. Latest data search/spatial search
    JSON response: { "type": 200, "title": "Success", "results": [ ] }
    The expected JSON response should look like:
    {
      "type": "urn:dx:rs:Success",
      "title": "",
      "results": [
        {
          "id": "resource-id-xyz-abc"
        }
      ]
    }
  3. Capitalisation of the first letter in type, e.g. "type":"urn:dx:rs:general" should be "type":"urn:dx:rs:General".
    Please refer to the URN response defined in the draft BIS Standard: IS 18003 (Part 2):2021, Doc No: LITD 28 (17249), Unified Data Exchange Part 2: API specifications.
    @kailash @swaminathanvasanth
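
A sketch of how the 405 case could be made to return the expected URN body with vertx-web's Router.errorHandler, assuming only the valid methods are registered on the /apis path; the URN strings are the ones expected above:

import io.vertx.core.Vertx;
import io.vertx.core.json.JsonObject;
import io.vertx.ext.web.Router;

public class MethodNotAllowedHandler {
  public static void main(String[] args) {
    Vertx vertx = Vertx.vertx();
    Router router = Router.router(vertx);

    // router.get("/apis").handler(...); // only valid methods registered on the path

    // When a path matches but the method does not, the router fails with 405;
    // this handler shapes that failure into the expected URN response.
    router.errorHandler(405, ctx -> {
      JsonObject body = new JsonObject()
          .put("type", "urn:dx:rs:MethodNotAllowed")
          .put("title", "Method Not Allowed")
          .put("detail", "Specified method is invalid for this resource");
      ctx.response()
          .setStatusCode(405)
          .putHeader("content-type", "application/json")
          .end(body.encode());
    });
  }
}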

No Content for Latest APIs for public resources

The response is 204 for Latest queries on any public resource in the catalogue.
Below is an example cURL:
curl --location --request GET 'https://rs.iudx.org.in/ngsi-ld/v1/entities/suratmunicipal.org/6db486cb4f720e8585ba1f45a931c63c25dbbbda/rs.iudx.org.in/surat-itms-additional-info/surat-itms-depot-info' \
--header 'token: eyJ0eXAiOiJKV1QiLCJhbGciOiJFUzI1NiJ9.eyJzdWIiOiI0NGI2MGVmNC1hMjZmLTRlOTctOWEzOS05OGIyNzg0M2EyMmIiLCJpc3MiOiJhdXRob3JpemF0aW9uLml1ZHgub3JnLmluIiwiYXVkIjoicnMuaXVkeC5vcmcuaW4iLCJleHAiOjE2Mzg5MTI4NDEsImlhdCI6MTYzODg2OTY0MSwiaWlkIjoicnM6cnMuaXVkeC5vcmcuaW4iLCJyb2xlIjoiY29uc3VtZXIiLCJjb25zIjp7fX0.EyWw14TkO_0bhJfYZmdZ8Nbz_OJPb93ty37SW3TJ3f_y10nZUuuFWtIv-jz5ZdpQVduO0Gd-riWQtJHUjhsvxQ'

1.4. Read Unique Attributes from Database

The unique-attribute information needed to determine which attribute to use when querying for the latest data is currently read from a configuration file. With this requirement, we need to read the unique-id and unique-attribute from a database whenever required.

Database to use: PostgreSQL
Database schema: id,unique-attribute
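
A hedged sketch of such a read with the Vert.x reactive PostgreSQL client; the table and column names (unique_attributes, resource_id, unique_attribute) and the connection details are assumptions for illustration:

import io.vertx.core.Vertx;
import io.vertx.pgclient.PgConnectOptions;
import io.vertx.pgclient.PgPool;
import io.vertx.sqlclient.PoolOptions;
import io.vertx.sqlclient.Row;

public class UniqueAttributeReader {
  public static void main(String[] args) {
    Vertx vertx = Vertx.vertx();
    PgConnectOptions connectOptions = new PgConnectOptions()
        .setHost("localhost").setPort(5432)
        .setDatabase("iudx").setUser("iudx").setPassword("secret"); // placeholders
    PgPool pool = PgPool.pool(vertx, connectOptions, new PoolOptions().setMaxSize(5));

    // Read the id -> unique-attribute mapping whenever it is required.
    pool.query("SELECT resource_id, unique_attribute FROM unique_attributes")
        .execute()
        .onSuccess(rows -> {
          for (Row row : rows) {
            System.out.println(row.getString("resource_id") + " -> "
                + row.getString("unique_attribute"));
          }
        })
        .onFailure(Throwable::printStackTrace);
  }
}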

Subscription at a resource level is not binding to the proper routing key

Example

For resource level, it is binding to vmc.gov.in/ae95ac0975a80bd4fd4127c68d3a5b6f141a3436/rs.iudx.org.in/vadodara-emergency-vehicles/ambulance-live instead of vmc.gov.in/ae95ac0975a80bd4fd4127c68d3a5b6f141a3436/rs.iudx.org.in/vadodara-emergency-vehicles/.ambulance-live

1.1.2. Admin API for updating token invalidation

Details about the API are shown below:

Description: This API shall allow IUDX Auth Admins to invalidate a token
Endpoint: /admin/revokeToken
Method: POST
Header: JWT Token (Admin Token)
Body: { "sub": "123e4567-e89b-12d3-a456-426614174000" }

The API should perform the following:

  • Create the information in PostgreSQL

  • Schema for PostgreSQL: (sub, time)

  • After storing, do an upsert when the auth server calls again

  • Steps: Decode --> LookUp --> Validate

  • Flow to check:

  1. table.sub == jwt.sub
  2. table.iat < jwt.iat
  • Broadcast this information over internal RabbitMQ (sub, time)
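
A hedged sketch of the validate step, under the reading that a token passes only when no matching entry exists or the token was issued after the stored revocation time (table time < jwt.iat); names are illustrative, not the server's actual code:

import java.time.Instant;

public class TokenRevocationCheck {

  // Returns true when the token should still be accepted.
  static boolean isAllowed(String jwtSub, Instant jwtIat, String tableSub, Instant tableTime) {
    if (!jwtSub.equals(tableSub)) {
      return true;                      // no revocation entry for this subject
    }
    return tableTime.isBefore(jwtIat);  // table time < jwt.iat: issued after revocation
  }

  public static void main(String[] args) {
    String sub = "123e4567-e89b-12d3-a456-426614174000";
    Instant revokedAt = Instant.parse("2022-05-02T06:00:00Z");
    System.out.println(isAllowed(sub, Instant.parse("2022-05-02T05:00:00Z"), sub, revokedAt)); // false
    System.out.println(isAllowed(sub, Instant.parse("2022-05-02T07:00:00Z"), sub, revokedAt)); // true
  }
}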

[Refactoring] refactor config files for correct data types according to values.

Currently, integer values are also specified as string types. This sometimes leads to cast exceptions; refactor the current configs to use the proper types.

e.g.:

"catServerPort": "443",
.
.

"dataBrokerManagementPort": "30042",
"connectionTimeout": "6000",
"requestedHeartbeat": "60",
"handshakeTimeout": "6000",
"requestedChannelMax": "5",
"networkRecoveryInterval": "500",

It would be better to specify them as:

"catServerPort": 443,
.
.

"dataBrokerManagementPort": 30042,
"connectionTimeout": 6000,
"requestedHeartbeat": 60,
"handshakeTimeout": 6000,
"requestedChannelMax": 5,
"networkRecoveryInterval": 500,

to avoid any need for casting while reading and eliminate cast-exception scenarios.
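
A short illustration of the cast problem with Vert.x's JsonObject, using catServerPort as the example key:

import io.vertx.core.json.JsonObject;

public class ConfigTypesDemo {
  public static void main(String[] args) {
    JsonObject stringTyped = new JsonObject().put("catServerPort", "443"); // current style
    JsonObject intTyped = new JsonObject().put("catServerPort", 443);      // proposed style

    System.out.println(intTyped.getInteger("catServerPort")); // 443, no casting needed

    try {
      stringTyped.getInteger("catServerPort"); // String value cannot be cast to Number
    } catch (ClassCastException e) {
      System.out.println("cast exception: " + e.getMessage());
    }
  }
}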

1.12. Store subscription information

We need to incorporate a routine whereby the subscription information is stored in the database. It should contain the following information:

  • id (queue-name)
  • entities (routing-key)
  • last-refreshed-time

1.3. ImmuDB client

ImmuDB client connection gets terminated within 2 hours. Need to understand if it is a k8s issue. If not, restarting the connection every hour might fix it.

1.6. Broadcast token invalidation Information

With the Admin API mentioned in section 1.1.2, the auth admin will specify the token to be invalidated. This information will be stored in the database, which shall be used as the source of truth. The same shall be published (broadcast) to interested parties using the internal RabbitMQ server.

1.17. Ingestion API flow changes

For every create ingestion API call, we need to update the user permissions.

STEP 1: Get the list of permissions of the user

Example
curl --location --request GET 'http://localhost:15672/api/users/2563e6d4-5884-40e8-9d9f-e84ee956298b/permissions' \
--header 'Authorization: Basic Z3Vlc3Q6Z3Vlc3Q=' \
--data-raw ''

Response
[
  {
    "user": "2563e6d4-5884-40e8-9d9f-e84ee956298b",
    "vhost": "IUDX",
    "configure": "",
    "write": "iisc.ac.in/89a36273d77dac4cf38114fca1bbe64392547f86/rs.iudx.io/surat-itms-realtime-information",
    "read": ""
  }
]

Take the “write” attribute value. Add a “|” followed by the entities obtained in the request and proceed to the next step. Make sure to keep the read and configure permissions as is.

STEP 2: Update the permissions

Example
curl --location --request PUT 'http://localhost:15672/api/permissions/IUDX/2563e6d4-5884-40e8-9d9f-e84ee956298b' \
--header 'Authorization: Basic Z3Vlc3Q6Z3Vlc3Q=' \
--header 'Content-Type: application/json' \
--data-raw '{
  "configure": "",
  "write": "iisc.ac.in/89a36273d77dac4cf38114fca1bbe64392547f86/rs.iudx.io/surat-itms-realtime-information|iisc.ac.in/89a36273d77dac4cf38114fca1bbe64392547f86/rs.iudx.io/pune-env-flood",
  "read": ""
}'

getting "invalidTemporalParam" for temporal query of more then 10 days

Currently, when a consumer queries a resource using the temporal API, say a "between" query where the difference between start_date and end_date is more than 10 days, the consumer gets the response below:

[screenshot: urn:dx:rs:invalidTemporalParam error response]

Suggestion: instead of the consumer getting a message that just says "urn:dx:rs:invalidTemporalParam" in the response, can we have a response that mentions that the number of days cannot be more than 10 for temporal data, or some other relevant URN?
This will avoid some confusion on the consumer's side.

API documentation

Need to update the API endpoints and responses in the API documentation.

Catalogue Response

Need to update the catalogue response check to reflect the new URN changes appropriately.

1.1.1. Admin API for updating item-id and unique attribute of a resource

For details about the API refer to the SRS document (section 1.1.1)

For Create, Update and Delete, the API should perform the following:

  • Create, Update or Delete the information in PostgreSQL
  • Broadcast this information over internal RabbitMQ

Information on the exchange to publish to is described in the SRS document (section 1.5).
