steph-koopmanschap / jasma

JASMA - Just Another Social Media App

License: Apache License 2.0


jasma's Introduction

JASMA - Just Another Social Media App

Table of Contents

Why JASMA?

JASMA is an open source social media platform. The purpose of JASMA is for learning how to code on a large collaborative project.

Screenshots

Jasma screenshot0 Jasma screenshot1

Contributing

Filling out these anonymous surveys will help us a lot in improving JASMA:
Most important features in Social Media

Social Media Improvement Survey

If you like this project please give it a star :D

The discord group of JASMA:
https://discord.gg/7xHzmmVW4w

Everyone is welcome to contribute to this open source project.
It doesn't matter if you are a complete beginner or a senior developer.
To contribute, join the Discord group for direct access,
or feel free to create a PR, an issue, or a fork of this project.
Your contributions will be recognized.

Currently looking for the following contributions:

  • Frontend design (UI/UX).
  • Logo and graphics art.
  • Frontend development.
  • Backend development.
  • Android App development. (React Native)
  • iOS App development.
  • DevOps.
  • Cybersecurity.
  • Testing and bug reporting.
  • Ideas and inspiration.
  • Legal advice.
  • Documentation writers.
  • Spreading the word :)

Thank you.

How to contribute

The development (dev) branch is usually the most up-to-date version of the project.
To contribute code, create a new branch from the development branch with an appropriate name for the bug or feature you are working on.
Then create a PR for your branch to merge into the development branch.

You may also directly contact the project manager on Discord @ Greylien#8501

Documentation

See DOCS.md

Version history

See Version history

The Purpose of JASMA

In the interest of fostering a learning environment for developers of all experience levels, the purpose of JASMA has been established to meet the following six points:

1. Dedicated Learning

The goal of Jasma is to facilitate the acquisition of new skills, which can extend beyond web development to other areas such as devops, UI/graphics design, data science, cybersecurity, AI, project management, systems design, marketing, and more. The dedicated aspect of this learning means that it is intended to achieve practical goals that can be applied in production projects.

2. Collaboration

Collaboration is an important component of the learning process, and it is especially relevant to software development, which relies heavily on communication and teamwork to produce a functioning system. Multiple perspectives on a given problem can be useful in identifying the best solutions, and collaboration can inspire motivation and support within the group. It is important to maintain a blameless culture, in which members work together to solve problems without assigning blame.

3. Creativity

Creativity is an essential part of learning and can serve as an inspiration for further learning opportunities. JASMA encourages creative thinking and experimentation, allowing members to add novel features to the project that may be experimental, unconventional, or innovative. For example, custom colors, fonts, and page/post customizations were discussed as potential features, as well as the ability to write posts in markup.

4. Mentorship and Guidance

Mentorship and guidance can help accelerate the learning process and make it more enjoyable. Members can benefit from having a mentor or guide who can provide answers to their questions and help steer them in the right direction. Mentorship does not necessarily imply that the mentor is more experienced, but rather that they have a different perspective to offer.

5. Challenging and Inspiring

Learning to create a social media platform or any large software project is a difficult and challenging task. JASMA aims to challenge its members and inspire them to develop their skills with confidence. Breaking down complex problems into smaller, more manageable steps and learning from mistakes is an important part of the learning process.

6. Fair Compensation

In the event that JASMA gains a significant user base and generates revenue beyond operational costs, fair compensation for active members will be considered. This compensation could include educational resources such as CodeCademy subscriptions, Coursera, Udemy, Google courses, certificates, books, and other resources that foster learning opportunities.

jasma's People

Contributors

fflorent-01, johnkat-mj, junifruit, l4lilul3lo, steph-koopmanschap, stephk-mmdc

Stargazers


Watchers


jasma's Issues

implement CSRF in client and backend

  • Add CSRF protection to the POST, PUT, and DELETE methods in the backend (or at least the ones that require it).
  • Make sure the CSRF token is sent from the frontend to the backend in all requests.
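Since the backend is being translated to Python/Django, Django's built-in CsrfViewMiddleware already handles most of this for unsafe methods. For illustration, the core of the check is a constant-time comparison of a server-stored token against the one the client echoes back. This is a minimal sketch, not JASMA's actual implementation; the function names are hypothetical.

```python
import hmac
import secrets

def issue_csrf_token() -> str:
    """Generate a random CSRF token to store in the user's session
    and hand to the client (e.g. in a cookie or response body)."""
    return secrets.token_urlsafe(32)

def is_csrf_valid(session_token: str, request_token: str) -> bool:
    """Constant-time comparison of the server-side token with the one
    the frontend sent back (e.g. in an X-CSRF-Token header)."""
    if not session_token or not request_token:
        return False
    return hmac.compare_digest(session_token, request_token)
```

The frontend's job is then only to read the token it was issued and include it with every POST, PUT, and DELETE request.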

Bio page does not work

When browsing to the bio page, an error occurs (see screenshot).
/user/[username]/bio
At least one user needs to be registered/exist to be able to visit the profile page and go to the bio page (click "about" on the profile page). You can go to someone's profile page by clicking their username where it appears in a post or comment.

image

Clicking "See Followers" should open Modal

On the profile page when you click "See followers" it should open a modal which renders the following components as children:

<FollowersList userID={data ? data?.user_id : ""} />
<FolloweesList userID={data ? data?.user_id : ""} />

The code is in /next/pages/user/[username].js
The modal code is in /next/components/Modal.js

Currently nothing happens when one clicks "See followers"

If a user creates a post without an image, the file_url is undefined.

See screenshot.
When a post is auto-generated, the file_url is empty or null. This is how it should be for a post without an image.
If a user uploads a post without an image, however, the file_url in the database becomes
localhost:5000/media/posts/undefined/ even though the file_url should be empty.

image

How do we deal with unused hashtags?

A hashtag in the database can either be linked to a post or not linked to any post (orphan hashtag).
The system needs to check/scan the orphan hashtags and then delete them.

What are the requirements for deleting an unused hashtag?

  • Periodic scan?
  • Or when the combined unused hashtags are above a certain data size limit?
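Whichever trigger is chosen, the scan itself reduces to a set difference between all hashtags and the ones still referenced by posts. A minimal sketch with in-memory stand-ins for the hashtag table and the post-hashtag join table (all names hypothetical):

```python
def find_orphan_hashtags(hashtags, post_hashtags):
    """Return hashtags not linked to any post.
    `hashtags` is a set of tag names; `post_hashtags` maps
    post_id -> set of tag names (a stand-in for the join table)."""
    used = set()
    for tags in post_hashtags.values():
        used.update(tags)
    return {tag for tag in hashtags if tag not in used}

def purge_if_over_limit(hashtags, post_hashtags, max_orphans=100):
    """Delete orphans only once they exceed a threshold, which is one
    of the options the issue raises (the other being a periodic scan).
    Returns (remaining_hashtags, purged_hashtags)."""
    orphans = find_orphan_hashtags(hashtags, post_hashtags)
    if len(orphans) > max_orphans:
        return hashtags - orphans, orphans
    return hashtags, set()
```

In the real system the same logic would be a single SQL delete with a NOT EXISTS subquery against the join table, run either on a schedule or when the threshold is exceeded.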

Analyse and implement rate limiters

To reduce server overload and prevent potential exploits, it is important to think about rate limiting the API.

  1. Each API end-point needs to be looked at for a potential rate limiter.
  2. As well as API routers and the global API.
  3. Each rate limiter needs to be checked for its appropriate parameters. (How many requests are allowed per time frame?)
  4. The rate limiters need to be implemented.
  5. The rate limiters need to be tested.

Info on rate limiters by Google
https://cloud.google.com/architecture/rate-limiting-strategies-techniques
https://cloud.google.com/armor/docs/rate-limiting-overview

Info on rate limiters by Facebook
https://developers.facebook.com/docs/graph-api/overview/rate-limiting
https://developers.facebook.com/docs/marketing-apis/rate-limiting

There are many factors to consider when developing a rate limiter for the real world. For example, in many use cases, a rate limiter is in the most critical path of an application. It is important in those situations to think through all the failure scenarios and consider the performance implications of placing it in such a critical path.

Gathering Requirements and Key Considerations

Before diving headfirst into building our rate limiter, it is important to understand our goals. This starts with gathering requirements and considering key design factors. The requirements can vary based on the nature of our service and the use cases we’re trying to support.

     

First, we should consider the nature of our service. Is it a real-time, latency-sensitive application or an asynchronous job where accuracy and reliability are more important than real-time performance? This can also guide our decisions around how we handle rate limit violations. Should our system immediately reject any further requests, or should it queue them and process them as capacity becomes available?

Next, we should consider the behavior of our clients. What are their average and peak request rates? Are their usage patterns predictable, or will there be significant spikes in traffic? This analysis is useful for creating rate limiting rules that protect our system without hindering legitimate users.

Then, consider the scale and performance needs. We need to understand the volume of requests we anticipate and set the target latency of our system. Is the system serving millions of customers making billions of requests, or are we dealing with a smaller number? Is latency critical, or can we sacrifice some speed for enhanced reliability?

Our rate limiting policy also needs careful consideration. Are we limiting based on the number of requests per unit of time, size of requests, or some other criteria? Is the limit set per client, or shared across all clients? Are there different tiers of clients with different limits? Is the policy strict, or does it allow for bursting?
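A policy that allows bursting while still enforcing a long-run average rate is commonly implemented as a token bucket. Below is a minimal single-instance sketch; the injectable clock is only there so the behavior can be tested deterministically.

```python
import time

class TokenBucket:
    """Token-bucket limiter: allows short bursts up to `capacity`
    while enforcing a long-run average of `rate` requests/second."""

    def __init__(self, rate: float, capacity: float, now=time.monotonic):
        self.rate = rate          # tokens added per second
        self.capacity = capacity  # maximum burst size
        self.tokens = capacity    # start with a full bucket
        self.now = now
        self.last = now()

    def allow(self) -> bool:
        t = self.now()
        # Refill tokens for the elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (t - self.last) * self.rate)
        self.last = t
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

A stricter policy with no bursting would instead use a leaky bucket or a plain fixed-window counter; the choice follows directly from the policy questions above.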

Some rate limiters will have persistence requirements. For a long rate limiting window, how do we persist the rate limit states like counters for long-term tracking? This requirement might be especially important in cases where latency is less critical than long-term accuracy, such as in an asynchronous job processing system.

Let’s consider some examples to better understand these points.

For a real-time service like a bidding platform where latency could be highly critical, we would need a rate limiter that doesn’t add significant latency to the overall request time. We will likely need to make a tradeoff between accuracy and resource consumption. A more precise rate limiter might use more memory and CPU, while a simpler one might allow occasional bursts of requests.

On the other hand, in an asynchronous job processing system that processes large batches of data analysis jobs, our rate limiter doesn’t need to enforce the limit in real-time, but must ensure that the total number of jobs submitted by each user doesn’t exceed the daily limit. In such cases, storing the rate limiting states durably, for instance in a database, might be crucial.
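As a sketch of that durable approach, a per-user daily counter can live in the database itself, using a fixed calendar-day window. SQLite stands in here for whatever store the real system would use, and the table and class names are hypothetical:

```python
import sqlite3
import datetime

class DailyJobLimiter:
    """Durable per-user daily job counter (fixed calendar-day window),
    backed by SQLite as a stand-in for a production database."""

    def __init__(self, conn, daily_limit):
        self.conn = conn
        self.daily_limit = daily_limit
        conn.execute(
            "CREATE TABLE IF NOT EXISTS job_counts "
            "(user_id TEXT, day TEXT, count INTEGER, PRIMARY KEY (user_id, day))"
        )

    def try_submit(self, user_id, day=None):
        """Return True and record the job if the user is under the
        daily limit; otherwise return False."""
        day = day or datetime.date.today().isoformat()
        row = self.conn.execute(
            "SELECT count FROM job_counts WHERE user_id = ? AND day = ?",
            (user_id, day),
        ).fetchone()
        count = row[0] if row else 0
        if count >= self.daily_limit:
            return False
        self.conn.execute(
            "INSERT INTO job_counts (user_id, day, count) VALUES (?, ?, 1) "
            "ON CONFLICT(user_id, day) DO UPDATE SET count = count + 1",
            (user_id, day),
        )
        return True
```

Accuracy here comes from the database's own atomicity, at the cost of a round trip per check, which is exactly the latency-versus-accuracy tradeoff described above.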

Rate Limiter Architecture
The main architectural question is: where should we place the rate limiter in our application stack?

This largely depends on the configuration of the application stack. Let’s examine this in more detail.

The location where we integrate our rate limiter into our application stack matters significantly. This decision will depend on the rate limiting requirements we gathered in the previous section.

In the case of typical web-based API-serving applications, we have several layers where rate limiters could potentially be placed.

CDN and Reverse Proxy
The outermost layer is the Content Delivery Network (CDN) which also functions as our application’s reverse proxy. Using a CDN to front an API service is becoming an increasingly prevalent design for production environments. For example, Cloudflare, a well-known CDN, offers some rate limiting features based on standard web request headers like IP address or other common HTTP headers.

If a CDN is already a part of the application stack, it’s a good initial defense against basic abuse. More sophisticated rate limiters can be deployed closer to the application backend to address any remaining rate limiting needs.

Some production deployments maintain their own reverse proxy, rather than a CDN. Many of these reverse proxies offer rate limiting plugins. Nginx, for example, can limit connections, request rates, or bandwidth usage based on IP address or other variables such as HTTP headers. Traefik is another example of a reverse proxy, popular within the Kubernetes ecosystem, that comes with rate limiting capabilities.

If a reverse proxy supports the desired rate limiting algorithms, it is a suitable location for a basic rate limiter.

However, be aware that large-scale deployments usually involve a cluster of reverse proxy nodes to handle the large volume of requests. This results in rate limiting states distributed across multiple nodes. It can lead to inaccuracies unless the states are synchronized. We’ll discuss this complex distributed system issue in more detail later.

API Gateway
Moving deeper into the application stack, some deployments utilize an API Gateway to manage incoming traffic. This layer can host a basic rate limiter, provided the API Gateway supports it. This allows for control over individual routes and lets us apply different rate limiting rules for different endpoints.

Amazon API Gateway is an example of this. It handles rate limiting at scale. We don’t need to worry about managing rate limiting states across nodes, as would be required with our own cluster of reverse proxy servers. A potential downside is that the rate limiting control might not be as fine-grained as we would like.

Application Framework and Middleware
If our rate limiting needs require more fine-grained identification of the resource to limit, we may need to place the rate limiter closer to the application logic. For example, if limits depend on user-specific attributes like subscription type, we'll need to implement the rate limiter at this level.

In some cases, the application framework might provide rate limiting functionality via middleware or a plugin. Like in previous cases, if these functions meet our needs, this would be a suitable place for rate limiting.

This method allows for rate limiting integration within our application code as middleware. It offers customization for different use-cases and enhances visibility, but it also adds complexity to our application code and could affect performance.
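To illustrate the middleware approach, a sliding-window limiter can be written in the Django middleware style (a callable wrapping get_response). The request and response objects below are simplified dict stand-ins, not real framework objects:

```python
import time
from collections import defaultdict, deque

class RateLimitMiddleware:
    """Sliding-window limiter as application middleware. Keeps a deque
    of recent request timestamps per client key and rejects requests
    once `limit` requests have been seen within `window` seconds."""

    def __init__(self, get_response, limit=60, window=60.0, now=time.monotonic):
        self.get_response = get_response
        self.limit = limit
        self.window = window
        self.now = now
        self.hits = defaultdict(deque)  # client key -> request timestamps

    def __call__(self, request):
        # At this layer the key can be anything the application knows:
        # user id, API key, subscription tier, not just an IP address.
        key = request["client_ip"]
        t = self.now()
        q = self.hits[key]
        while q and t - q[0] >= self.window:
            q.popleft()  # drop timestamps that fell out of the window
        if len(q) >= self.limit:
            return {"status": 429, "body": "Too Many Requests"}
        q.append(t)
        return self.get_response(request)
```

Note the state lives in process memory, so this sketch inherits the multi-node accuracy problem discussed later.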

Application
Finally, if necessary, we could incorporate rate limiting logic directly in the application code. In some instances, the rate limiting requirements are so specific that this is the only feasible option.

This offers the highest degree of control and visibility but introduces complexity and potential tight coupling between the rate limiting and business logic.

Like before, when operating at scale, all issues related to sharing rate limiting states across application nodes are relevant.

Rate Limiting States
Another significant architectural decision is where to store rate limiting states, such as counters. For a low-scale, simple rate limiter, keeping the states entirely in the rate limiter’s memory might be sufficient.

However, this is likely the exception rather than the rule. In most production environments, regardless of the rate limiter’s location in the application stack, there will likely be multiple rate limiter instances to handle the load, with rate limiting states distributed across nodes.

We’ve frequently referred to this as a challenge. Let’s dive into some specifics to illustrate this point.

When rate limiters are distributed across a system, it presents a problem as each instance may not have a complete view of all incoming requests. This can lead to inaccurate rate limiting.

For example, let’s assume we have a cluster of reverse proxies, each running its own rate limiter instance. Any of these proxies could handle an incoming request. This leads to distributed and isolated rate limiting states. A user might potentially exceed their rate limit if their requests are handled by multiple proxies.

In some cases, the inaccuracies might be acceptable. Referring back to our initial discussion on requirements, if our objective is to provide a basic level of protection, a simple solution allowing each instance to maintain its own states might be adequate.

However, if maintaining accurate rate limiting is a core requirement - for example, if we want to ensure fair usage of resources across all users or we have to adhere to strict rate limit policies for compliance reasons - we will need to consider more sophisticated strategies.

Centralized State Storage
One such strategy is centralized state storage. Here, instead of each rate limiter instance managing its own states, all instances interact with a central storage system to read and update states. This method does, however, come with many tradeoffs.
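The essence of centralized state storage is that every limiter instance increments the same counter. The sketch below uses an in-process store guarded by a lock as a stand-in for something like Redis, where an atomic INCR (typically paired with EXPIRE) plays the same role; the class names are hypothetical:

```python
import threading

class CentralStore:
    """In-process stand-in for a centralized store such as Redis.
    incr() mirrors an atomic counter increment."""

    def __init__(self):
        self._counts = {}
        self._lock = threading.Lock()

    def incr(self, key):
        with self._lock:
            self._counts[key] = self._counts.get(key, 0) + 1
            return self._counts[key]

class SharedRateLimiter:
    """Each limiter instance (e.g. one per proxy node) consults the
    shared store, so counts stay accurate across instances."""

    def __init__(self, store, limit, window=60):
        self.store = store
        self.limit = limit
        self.window = window

    def allow(self, client, window_id):
        # In production, window_id would be derived from the clock,
        # e.g. int(time.time()) // self.window (fixed-window counting).
        count = self.store.incr((client, window_id))
        return count <= self.limit
```

The tradeoffs mentioned above show up immediately: every allow() is now a network round trip to the store, and the store becomes a single point of failure that needs its own availability story.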


Translate the Node.js/Express backend to Python/Django

  • User views 100% Done
  • Auth views. 100% Done
  • Hashtag views. 100% Done
  • Search views 100% Done
  • Follower views 100% Done
  • Report views 100% Done
  • Add Redis Database 100% Done
  • Post views. 100% Done
  • Comment views. 100% Done
  • Notification views 100% Done
  • #55 10% Done
  • #37 0% Done
  • #60 10% Done
  • #56 0% Done
  • Configure Django with PostGreSQL 100% Done
  • #57 10% Done
  • #64 100% Done

Overall progress: 60%

Error when running npm run db:generate

In the /express/ dir, when running the command
npm run db:generate 100
an error occasionally occurs (see error below). Running npm run db:resetTables before db:generate makes the error more likely to occur.

The error does not affect the entire app, and it does not break the generation process entirely.
It only means fewer followers are generated.

The error occurs in the code in:
/express/models/UserFollowing.js
in the static async generate(n) function.

UPDATE:
This error probably happens because the same combination of user_id and follow_id is generated more than once.
The current code only prevents user_id and follow_id from being identical within a single entry,
but it does not prevent duplicate entries.
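One possible fix for the generator, sketched here in Python rather than the project's actual Sequelize code, is to draw (user_id, follow_id) pairs into a set so duplicates are rejected before any INSERT ever runs (the function name is hypothetical):

```python
import random

def generate_follow_pairs(user_ids, n, rng=random):
    """Generate n unique (user_id, follow_id) pairs with distinct ids,
    avoiding both self-follows and the duplicate-key insert the
    unique constraint users_following_pkey rejects."""
    max_pairs = len(user_ids) * (len(user_ids) - 1)
    if n > max_pairs:
        raise ValueError(f"only {max_pairs} distinct pairs are possible")
    pairs = set()
    while len(pairs) < n:
        user, follow = rng.sample(user_ids, 2)  # two distinct users
        pairs.add((user, follow))               # set deduplicates pairs
    return list(pairs)
```

An alternative that keeps the existing random generation is to use an upsert that ignores conflicts (ON CONFLICT DO NOTHING), accepting that slightly fewer rows are inserted.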

          follow_id: '09d01dfd-e696-4b40-aef7-af03d7e8d6ac'
        },
        _previousDataValues: { user_id: undefined, follow_id: undefined },
        uniqno: 1,
        _changed: Set(2) { 'user_id', 'follow_id' },
        _options: {
          isNewRecord: true,
          _schema: null,
          _schemaDelimiter: '',
          attributes: undefined,
          include: undefined,
          raw: undefined,
          silent: undefined
        },
        isNewRecord: true
      },
      validatorKey: 'not_unique',
      validatorName: null,
      validatorArgs: []
    },
    ValidationErrorItem {
      message: 'follow_id must be unique',
      type: 'unique violation',
      path: 'follow_id',
      value: '09d01dfd-e696-4b40-aef7-af03d7e8d6ac',
      origin: 'DB',
      instance: UserFollowing {
        dataValues: {
          user_id: '79d86b47-acd1-4752-aeb1-81826ebe3549',
          follow_id: '09d01dfd-e696-4b40-aef7-af03d7e8d6ac'
        },
        _previousDataValues: { user_id: undefined, follow_id: undefined },
        uniqno: 1,
        _changed: Set(2) { 'user_id', 'follow_id' },
        _options: {
          isNewRecord: true,
          _schema: null,
          _schemaDelimiter: '',
          attributes: undefined,
          include: undefined,
          raw: undefined,
          silent: undefined
        },
        isNewRecord: true
      },
      validatorKey: 'not_unique',
      validatorName: null,
      validatorArgs: []
    }
  ],
  parent: error: duplicate key value violates unique constraint "users_following_pkey"
      at Parser.parseErrorMessage (/home/notaspoon/programming/collab/jasma/express/node_modules/pg-protocol/dist/parser.js:287:98)
      at Parser.handlePacket (/home/notaspoon/programming/collab/jasma/express/node_modules/pg-protocol/dist/parser.js:126:29)
      at Parser.parse (/home/notaspoon/programming/collab/jasma/express/node_modules/pg-protocol/dist/parser.js:39:38)
      at Socket.<anonymous> (/home/notaspoon/programming/collab/jasma/express/node_modules/pg-protocol/dist/index.js:11:42)
      at Socket.emit (node:events:513:28)
      at addChunk (node:internal/streams/readable:324:12)
      at readableAddChunk (node:internal/streams/readable:297:9)
      at Readable.push (node:internal/streams/readable:234:10)
      at TCP.onStreamRead (node:internal/stream_base_commons:190:23) {
    length: 299,
    severity: 'ERROR',
    code: '23505',
    detail: 'Key (user_id, follow_id)=(79d86b47-acd1-4752-aeb1-81826ebe3549, 09d01dfd-e696-4b40-aef7-af03d7e8d6ac) already exists.',
    hint: undefined,
    position: undefined,
    internalPosition: undefined,
    internalQuery: undefined,
    where: undefined,
    schema: 'public',
    table: 'users_following',
    column: undefined,
    dataType: undefined,
    constraint: 'users_following_pkey',
    file: 'nbtinsert.c',
    line: '663',
    routine: '_bt_check_unique',
    sql: 'INSERT INTO "users_following" ("user_id","follow_id") VALUES ($1,$2) RETURNING "user_id","follow_id";',
    parameters: [
      '79d86b47-acd1-4752-aeb1-81826ebe3549',
      '09d01dfd-e696-4b40-aef7-af03d7e8d6ac'
    ]
  },
  original: error: duplicate key value violates unique constraint "users_following_pkey"
      at Parser.parseErrorMessage (/home/notaspoon/programming/collab/jasma/express/node_modules/pg-protocol/dist/parser.js:287:98)
      at Parser.handlePacket (/home/notaspoon/programming/collab/jasma/express/node_modules/pg-protocol/dist/parser.js:126:29)
      at Parser.parse (/home/notaspoon/programming/collab/jasma/express/node_modules/pg-protocol/dist/parser.js:39:38)
      at Socket.<anonymous> (/home/notaspoon/programming/collab/jasma/express/node_modules/pg-protocol/dist/index.js:11:42)
      at Socket.emit (node:events:513:28)
      at addChunk (node:internal/streams/readable:324:12)
      at readableAddChunk (node:internal/streams/readable:297:9)
      at Readable.push (node:internal/streams/readable:234:10)
      at TCP.onStreamRead (node:internal/stream_base_commons:190:23) {
    length: 299,
    severity: 'ERROR',
    code: '23505',
    detail: 'Key (user_id, follow_id)=(79d86b47-acd1-4752-aeb1-81826ebe3549, 09d01dfd-e696-4b40-aef7-af03d7e8d6ac) already exists.',
    hint: undefined,
    position: undefined,
    internalPosition: undefined,
    internalQuery: undefined,
    where: undefined,
    schema: 'public',
    table: 'users_following',
    column: undefined,
    dataType: undefined,
    constraint: 'users_following_pkey',
    file: 'nbtinsert.c',
    line: '663',
    routine: '_bt_check_unique',
    sql: 'INSERT INTO "users_following" ("user_id","follow_id") VALUES ($1,$2) RETURNING "user_id","follow_id";',
    parameters: [
      '79d86b47-acd1-4752-aeb1-81826ebe3549',
      '09d01dfd-e696-4b40-aef7-af03d7e8d6ac'
    ]
  },
  fields: {
    user_id: '79d86b47-acd1-4752-aeb1-81826ebe3549',
    follow_id: '09d01dfd-e696-4b40-aef7-af03d7e8d6ac'
  },
  sql: 'INSERT INTO "users_following" ("user_id","follow_id") VALUES ($1,$2) RETURNING "user_id","follow_id";'
}

Node.js v18.7.0

Allow a minimum of 1 hashtag and a maximum of 5 hashtags per post.

Every post is forced to have at least 1 hashtag, so that posts without hashtags cannot exist.
This will improve newsfeed algorithms.

Validation should be implemented on both the frontend and the backend.

The maximum of 5 hashtags is to prevent hashtag spam.
Also, each hashtag must be a minimum of 3 characters and a maximum of 50 characters:
words shorter than 3 characters are useless, and words longer than 50 characters are practically sentences.
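The rules above can be sketched as a small validation helper. This is an illustrative Python sketch, not the project's actual validation code; the function name and the allowed character set are assumptions:

```python
import re

# Hypothetical helper mirroring the rules above: 1-5 hashtags per post,
# each 3-50 word characters. The allowed character set is an assumption.
HASHTAG_RE = re.compile(r"^[A-Za-z0-9_]{3,50}$")

def validate_hashtags(hashtags):
    """Return (ok, error_message) for a list of hashtag strings."""
    if not 1 <= len(hashtags) <= 5:
        return False, "A post must have between 1 and 5 hashtags."
    for tag in hashtags:
        if not HASHTAG_RE.match(tag):
            return False, f"Invalid hashtag: {tag!r} (3-50 word characters required)."
    return True, None
```

The same checks would be duplicated client-side, since frontend validation alone can be bypassed.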

Road to the MVP (Minimum Viable Product)

Minimum requirements to make JASMA live

  • #54
  • Good documentation
  • PayPal payment system.
  • Advertisement placement system.
  • Content moderation system.
  • E-mailing system:
    • Password reset.
    • Email notifications.
    • Email confirmation.
  • Notification system. See #40
  • Upload / Download / View video content.
  • Fix issues:

Refactor post views and test

Make sure the view functions in backend/api/views/post_views.py are converted to DRF and integrated with what has been done so far in auth_views.py.

  • Caching is integrated for all post requests

  • Media handling is integrated for all post requests

  • Post related models are handled for all post requests

  • create_post is integrated => POST a single post

    • Hashtags get created if needed
    • File upload is working
    • Add following notification based on preferences
  • edit_post is integrated => PUT a single post

    • Hashtags get created/deleted if needed
    • File upload is working
    • Add following notification based on preferences
  • delete_post is integrated => DELETE a single post

    • Related orphan files get deleted
    • Related orphan hashtags get deleted
    • Related following notification gets deleted
  • get_user_posts is integrated => GET all posts from a specified user

    • Must support limit argument
  • get_single_post is integrated => GET a single post from post_id

  • get_multiple_posts is integrated => GET posts from a specified list of post_ids

  • get_global_newsfeed => GET posts from a specified post_type

    • Handles limit
  • get_newsfeed => GET posts from following, hashtags, limit

  • add_post_bookmark => POST a bookmark relative to a specified post_id

  • delete_post_bookmark => DELETE a bookmark relative to a specified post_id

  • get_bookmarked_posts => GET list of all bookmarked posts

Test Django APIs

Test the Django APIs with the Python requests and PyUnit (unittest) packages.
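A minimal skeleton for such a test module might look as follows. It uses the stdlib http.client instead of the requests package to stay self-contained; the host, port, endpoint path, and JSON field names are all assumptions about the dev setup, not the actual JASMA API:

```python
import http.client
import json
import unittest

BASE_HOST = "localhost"   # assumed dev server host
BASE_PORT = 8000          # assumed Django dev server port

def post_json(path, payload):
    """POST a JSON payload to the local API and return (status, body)."""
    conn = http.client.HTTPConnection(BASE_HOST, BASE_PORT, timeout=5)
    conn.request("POST", path, body=json.dumps(payload),
                 headers={"Content-Type": "application/json"})
    resp = conn.getresponse()
    return resp.status, resp.read()

class LoginAPITest(unittest.TestCase):
    """Hypothetical test case; /api/login and its fields are assumptions."""

    def test_login_wrong_password_is_rejected(self):
        status, _ = post_json("/api/login",
                              {"email": "test@example.com", "password": "wrong"})
        # A wrong password should produce a client-error status, not 200.
        self.assertIn(status, (400, 401, 403))
```

Run with `python -m unittest` against a running dev server; each endpoint in the views would get a similar TestCase.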

Implement payment features (Paypal, Stripe)

Integrate PayPal or Stripe payment features into the app.

Related:
https://github.com/steph-koopmanschap/jasma/blob/development/backend/api/views/payment_views.py

The bought credits should be stored in the balance field of the User model in models.py:

balance = models.DecimalField(max_digits=19, decimal_places=4, default=0, validators=[MinValueValidator(0)])

Transaction info should be stored in the Transaction model in models.py

https://github.com/steph-koopmanschap/jasma/blob/development/backend/api/models.py

Stripe Payment documentation
https://stripe.com/docs/payments/checkout/how-checkout-works

https://stripe.com/docs/payments/checkout/fulfill-orders

Paypal API documentation
https://developer.paypal.com/api/rest/

Postman API collection for Paypal
https://postman.com/paypal/collection/19024122-92a85d0e-51e7-47da-9f83-c45dcb1cdf24
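The credit arithmetic implied by the balance field can be sketched with Python's decimal module. credit_balance is a hypothetical helper mirroring the field's constraints (max_digits=19, 4 decimal places, never negative), not code from the repository:

```python
from decimal import Decimal, ROUND_HALF_UP

MAX_DIGITS = 19  # mirrors DecimalField(max_digits=19, decimal_places=4)

def credit_balance(balance, amount):
    """Return the new balance after crediting `amount` of bought credits.

    Illustrative only; the real logic would live in the Django views/models
    and persist through the ORM inside a Transaction record.
    """
    amount = Decimal(str(amount)).quantize(Decimal("0.0001"),
                                           rounding=ROUND_HALF_UP)
    if amount < 0:
        raise ValueError("Credit amount must be non-negative.")
    new_balance = (Decimal(str(balance)) + amount).quantize(Decimal("0.0001"))
    if len(new_balance.as_tuple().digits) > MAX_DIGITS:
        raise ValueError("Balance exceeds max_digits=19.")
    return new_balance
```

Using Decimal (never float) keeps the arithmetic exact at the 4-decimal-place precision the model enforces.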

Create notification system

  1. Define the types of notifications.
  2. Design how to store notification data.
    • Should notifications be deleted automatically after 48 hours?
    • Notifications could be stored in a Redis hash with the user ID as the key and the notification data as the value.
    • Redis sorted sets allow one to store multiple values in a single Redis key, with the added benefit of being sorted by a score assigned to each value. They can be used to store notification data for each user, with the timestamp of the notification as the score. This makes it easier to retrieve notifications in a specific order (such as most recent first) and to filter notifications by time range.
    • Use Redis Pub/Sub for real-time notifications: Redis Pub/Sub is a messaging system that publishes messages to multiple subscribers in real time. It can be used to push notifications to users the moment new notifications are added to the Redis store.
    • Use Redis EXPIRE to manage data retention: EXPIRE sets a TTL (time-to-live) on a Redis key. It can be used to automatically delete old notification data after a certain period, keeping the Redis store from growing too large.
  3. Configure notification settings and how to store those settings.
  4. Create a notification API.
    • Define the notification payload. What information is sent with each notification?
  5. Handle notifications in the app. When a user clicks on a notification they should be taken to that event directly.
  6. Design the notification UI.
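The sorted-set idea above can be illustrated with a small stdlib stand-in. A real implementation would use redis-py commands such as ZADD, ZRANGEBYSCORE, and ZREMRANGEBYSCORE; this class only mimics that pattern in memory:

```python
import bisect
import time

class NotificationStore:
    """In-memory stand-in for one Redis sorted set per user.

    Real code would do roughly: ZADD notifications:<user_id> <timestamp> <payload>,
    then read with ZRANGEBYSCORE and prune with ZREMRANGEBYSCORE or EXPIRE.
    """

    def __init__(self):
        self._sets = {}  # user_id -> sorted list of (timestamp, payload)

    def add(self, user_id, payload, timestamp=None):
        ts = time.time() if timestamp is None else timestamp
        # Keep the list ordered by timestamp (the "score").
        bisect.insort(self._sets.setdefault(user_id, []), (ts, payload))

    def recent_first(self, user_id, limit=10):
        """Most recent notifications first (highest score first)."""
        entries = self._sets.get(user_id, [])
        return [payload for _, payload in reversed(entries)][:limit]

    def expire_older_than(self, user_id, cutoff_ts):
        """Drop entries with a score below the cutoff (48h retention, etc.)."""
        entries = self._sets.get(user_id, [])
        idx = bisect.bisect_left(entries, (cutoff_ts,))
        self._sets[user_id] = entries[idx:]
```

The timestamp-as-score layout is what makes both "most recent first" reads and time-based expiry cheap.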

End user testing with Selenium and PyUnit

Test the whole app by testing with Selenium and PyUnit

Test cases:

  • Login wrong email: Should give error
  • Login wrong password: Should give error
  • Login correctly: Should login.
  • Create a post with wrong file format: Should give an error
  • Create a post correctly: Post should appear.
  • Create a comment on a post: Comment should appear.
  • Report a post
  • Follow a user.
  • Unfollow a user.
  • Subscribe to hashtags
  • Unsubscribe from hashtag
  • Change profile picture.

When posting a comment without an image, the comment is not made

When posting a comment without an image, the comment is not made.
When posting a comment with an image, the comment IS made and there is no problem.

Steps to reproduce this behavior:

  • Login (register if you have no account yet)
  • Comment on a post with only text and no image
    (create a post or run npm run db:generate 100 in the /express/ directory.)

In the browser console the following response appears:

Object { success: false, message: "Post must include content" }

The server terminal logs the following:

```
field name post_id
field value bed5f54c-0d8e-4241-bf78-c60266bf660d
field name comment_text
field value test
field name context
field value comment
formdata on close [Object: null prototype] {
  assignedEntryId: 'ca027bb2-4515-473d-9158-0d3325f9c666',
  post_id: 'bed5f54c-0d8e-4241-bf78-c60266bf660d',
  comment_text: 'test',
  context: 'comment'
}
and ndext 1
```

Searching for posts crashes the server

Searching for hashtags crashes the server

In /express/controllers/search.js async function searchHashtags(keyword)

```
jasma/express/node_modules/sequelize/lib/utils/sql.js:150
    throw new Error(`Positional replacement (?) ${replacementIndex} has no entry in the replacement map (replacements[${replacementIndex}] is undefined).`);
Error: Positional replacement (?) 0 has no entry in the replacement map (replacements[0] is undefined).
```

Line 14:
const resHashtags = await db.query(`SELECT * FROM posts_hashtags WHERE LOWER(hashtag) LIKE ?`, { replacements: [keyword] });

The error message indicates that keyword is undefined when the query runs, so replacements[0] has no entry. A likely fix is to guard against a missing keyword before querying, and to lowercase the value and wrap it in % wildcards so the LIKE comparison can actually match, e.g. { replacements: [`%${keyword.toLowerCase()}%`] }.

Implement feature: Edit posts and comments

At the moment posts and comments cannot be edited yet.

Expected behavior:

  1. An authorized (logged-in) user clicks on "Edit post" or "Edit comment".
  2. A modal pops up with an input box that contains the current text of the post/comment.
  3. The user edits the text of the post/comment.
  4. The user clicks on "Submit edit".
  5. The data is sent to the server.
  6. The server updates the post/comment in the PostgreSQL database.
  7. The server sends a success/failure response back to the user.
  8. The modal closes on success or displays an error message on failure.
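The server-side part of the flow (validate, update, respond) can be sketched as a pure function. apply_edit and the record fields are hypothetical; the real server would fetch the row from PostgreSQL and persist the change through the ORM:

```python
from datetime import datetime, timezone

def apply_edit(record, new_text, editor_id):
    """Validate and apply an edit to a post/comment dict.

    `record` stands in for a row fetched from PostgreSQL; all field
    names here are illustrative, not the actual schema.
    """
    # Only the author may edit their own post/comment.
    if record["author_id"] != editor_id:
        return {"success": False, "message": "Not authorized to edit."}
    new_text = new_text.strip()
    if not new_text:
        return {"success": False, "message": "Content must not be empty."}
    record["text"] = new_text
    record["edited_at"] = datetime.now(timezone.utc).isoformat()
    return {"success": True, "message": "Edit saved."}
```

The success/failure dict mirrors the { success, message } response shape the frontend already consumes elsewhere in the app.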

Refactor user_views with test

Make sure all views in backend/api/views/user_views.py are converted to DRF. Make sure they are all tested and in working condition.

  • Image handling is done for the profile picture.

get_user

  • id, username, email and role should be returned automatically along with all attributes in the query
  • ? Prevent private information from leaking
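One way to address the information-leak concern is a whitelist filter over the serialized user, so new model fields are private by default. The field names below are assumptions, not the actual User model attributes:

```python
# Whitelist of user attributes safe to expose publicly. The exact field
# names are assumptions for illustration, not the real model's fields.
PUBLIC_USER_FIELDS = {"id", "username", "role", "bio", "profile_pic"}

def public_user_view(user_row, include_email=False):
    """Return only whitelisted fields; pass include_email=True when the
    requester is the user themselves (e.g. get_loggedin_user)."""
    allowed = PUBLIC_USER_FIELDS | ({"email"} if include_email else set())
    return {k: v for k, v in user_row.items() if k in allowed}
```

In DRF this same effect is usually achieved with a serializer's `fields` list; the point is that anything not explicitly listed never leaves the server.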

get_loggedin_user => This is a weird one. Is it necessary? It could easily be merged with the previous one.

  • Retrieve User Profile and User Notification Preferences

update_user

  • Reproduce present behavior

delete_user

  • Reproduce present behavior

change_user_role

  • Reproduce present behavior
  • !! The current user should have admin/mod rights to change a role!

get_profile_pic

  • Reproduce present behavior

upload_profile_pic

  • Reproduce present behavior

Deployment with HTTPS/SSL on Nginx not working

Nginx was set up as a reverse proxy on ubuntu-server in VirtualBox in NAT network mode.
A static DNS entry was created in the network router mapping the domain jasmaserver.lan to the IP 192.168.254.151, which is the ubuntu-server machine in VirtualBox.
After creating a self-signed SSL certificate, the jasmaHTTPS.conf file was copied to /etc/nginx/sites-available and symlinked, and Nginx was restarted.

When trying to reach https://jasmaserver.lan or https://192.168.254.151 in the browser, Nginx gave a 502 Bad Gateway response.
But when visiting http://jasmaserver.lan:3000 the jasma app loads correctly without problems.
The command tail -f /var/log/nginx/error.log gave the following result:

2023/02/08 09:31:14 [error] 7830#7830: *1 SSL_do_handshake() failed (SSL: error:0A00010B:SSL routines::wrong version number) while SSL handshaking to upstream, client: 192.168.254.103, server: jasmaserver.lan, request: "GET / HTTP/1.1", upstream: "https://127.0.0.1:3000/", host: "192.168.254.151"

The ports 80, 443, 3000, and 5000 were open on the server.

The Node processes on ports 3000 and 5000 do not use HTTPS/SSL.
The Nginx reverse proxy uses HTTPS.
So the proxy should go from HTTPS (on the client side) to plain HTTP (on the upstream side).
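The error log above shows Nginx handshaking with upstream "https://127.0.0.1:3000/", while the Node process on port 3000 speaks plain HTTP; that mismatch is exactly what produces the "wrong version number" SSL error and the resulting 502. A sketch of the relevant server block, assuming the current jasmaHTTPS.conf uses proxy_pass https://... (certificate paths and names are assumptions):

```nginx
# Sketch of the relevant server block; paths and cert names are assumptions.
server {
    listen 443 ssl;
    server_name jasmaserver.lan;

    ssl_certificate     /etc/ssl/certs/jasma-selfsigned.crt;
    ssl_certificate_key /etc/ssl/private/jasma-selfsigned.key;

    location / {
        # Terminate TLS here and talk plain HTTP to the Node process.
        # "proxy_pass https://127.0.0.1:3000;" would cause the
        # "wrong version number" error, since Node serves plain HTTP.
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```

With proxy_pass pointing at http://, Nginx terminates TLS itself and the 502 should disappear.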
