
hanelo / knn-traditional-knn-bayesian


A comparison between traditional KNN and a Bayesian KNN approach.

License: MIT License

Language: Jupyter Notebook 100.00%

knn-traditional-knn-bayesian's People

Contributors

hanelo

knn-traditional-knn-bayesian's Issues

KNN traditional decluttering

The traditional KNN file could be decluttered:

import numpy as np
from sklearn.datasets import load_iris
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

def traditional_knn(x_train, y_train, x_test, k):
    knn = KNeighborsClassifier(n_neighbors=k)
    knn.fit(x_train, y_train)
    test_predictions = knn.predict(x_test)
    return test_predictions

def evaluate_classification(y_true, y_pred):
    accuracy = accuracy_score(y_true, y_pred)
    precision = precision_score(y_true, y_pred, average=None)
    recall = recall_score(y_true, y_pred, average=None)
    f1 = f1_score(y_true, y_pred, average=None)
    return accuracy, precision, recall, f1

# Load the Iris dataset
iris = load_iris()
X = iris.data
y = iris.target

# Set the value of k
k = 3

# Split the dataset into training and test sets
np.random.seed(42)  # Set seed for reproducibility
indices = np.random.permutation(len(X))
train_size = int(0.8 * len(X))

X_train, X_test = X[indices[:train_size]], X[indices[train_size:]]
y_train, y_test = y[indices[:train_size]], y[indices[train_size:]]

# Run traditional k-NN
traditional_test_predictions = traditional_knn(X_train, y_train, X_test, k)

# Compute evaluation metrics for traditional k-NN
traditional_test_accuracy, traditional_test_precision, traditional_test_recall, traditional_test_f1_score = evaluate_classification(y_test, traditional_test_predictions)

print("Traditional k-NN Results:")
print(f"Test Accuracy: {traditional_test_accuracy}")
print(f"Test Precision: {traditional_test_precision}")
print(f"Test Recall: {traditional_test_recall}")
print(f"Test F1-score: {traditional_test_f1_score}")
  • Introduced an evaluate_classification function to compute accuracy, precision, recall, and F1-score, making the code more readable.
  • Set a seed (np.random.seed(42)) so the random shuffling of the data is reproducible.
  • Now uses numpy.random.permutation for a simpler split of the data into training and test sets.
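
For context, the repository compares this traditional classifier against a Bayesian KNN. The Bayesian notebook is not shown in this issue, so the sketch below is only one common Bayesian treatment, assuming the labels of the k nearest neighbors are modeled as categorical draws with a symmetric Dirichlet prior; the function name bayesian_knn and the prior strength alpha are illustrative assumptions, not the repository's actual code.

import numpy as np
from sklearn.neighbors import NearestNeighbors

def bayesian_knn(x_train, y_train, x_test, k, alpha=1.0):
    # Hypothetical sketch: posterior predictive class probabilities from
    # neighbor label counts under a symmetric Dirichlet(alpha) prior.
    classes = np.unique(y_train)
    nn = NearestNeighbors(n_neighbors=k).fit(x_train)
    _, neighbor_idx = nn.kneighbors(x_test)
    posteriors = np.empty((len(x_test), len(classes)))
    for i, idx in enumerate(neighbor_idx):
        neighbor_labels = y_train[idx]
        counts = np.array([(neighbor_labels == c).sum() for c in classes])
        # Posterior predictive probability: (n_c + alpha) / (k + alpha * C)
        posteriors[i] = (counts + alpha) / (k + alpha * len(classes))
    # MAP prediction: pick the class with the highest posterior probability
    predictions = classes[np.argmax(posteriors, axis=1)]
    return predictions, posteriors

With alpha approaching 0 this reduces to the majority vote of traditional KNN, so the comparison isolates the effect of the prior, and the same evaluate_classification helper can score both sets of predictions.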
