
License: GNU General Public License v3.0


scanstatistics

An R package for space-time anomaly detection using scan statistics.

Installing the package

To install the latest (CRAN) release of this package, type the following:

install.packages("scanstatistics")

To install the development version of this package, type this instead:

devtools::install_github("benjamin-allevius/scanstatistics", ref = "develop")

What are scan statistics?

Scan statistics are used to detect anomalous clusters in spatial or space-time data. The gist of the methodology, at least in this package, is this:

  1. Monitor one or more data streams at multiple locations over intervals of time.
  2. Form a set of space-time clusters, each consisting of (1) a collection of locations, and (2) an interval of time stretching from the present to some number of time periods in the past.
  3. For each cluster, compute a statistic based on both the observed and the expected responses. Report the clusters with the largest statistics.

Main functions

Scan statistics

  • scan_eb_poisson: computes the expectation-based Poisson scan statistic (Neill 2005).
  • scan_pb_poisson: computes the (population-based) space-time scan statistic (Kulldorff 2001).
  • scan_eb_negbin: computes the expectation-based negative binomial scan statistic (Tango et al. 2011).
  • scan_eb_zip: computes the expectation-based zero-inflated Poisson scan statistic (Allévius & Höhle 2017).
  • scan_permutation: computes the space-time permutation scan statistic (Kulldorff et al. 2005).
  • scan_bayes_negbin: computes the Bayesian Spatial scan statistic (Neill 2006), extended to a space-time setting.

Zone creation

  • knn_zones: Creates a set of spatial zones (groups of locations) to scan for anomalies; a minimal usage sketch follows after this list. Input is a matrix in which rows are the enumerated locations and columns their k nearest neighbors. To create such a matrix, the following two functions are useful:
    • coords_to_knn: use stats::dist to get the k nearest neighbors of each location into a format usable by knn_zones.
    • dist_to_knn: use an already computed distance matrix to get the k nearest neighbors of each location into a format usable by knn_zones.
  • flexible_zones: An alternative to knn_zones that uses the adjacency structure of locations to create a richer set of zones. The additional input is an adjacency matrix; otherwise it works like knn_zones.
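
To make the zone format concrete, here is a minimal sketch of the knn_zones workflow; the coordinates below are hypothetical, not from any package dataset:

library(scanstatistics)

# Hypothetical coordinates for five locations (one row per location)
coords <- matrix(c(0, 0,
                   0, 1,
                   1, 0,
                   1, 1,
                   3, 3),
                 ncol = 2, byrow = TRUE)

# Matrix whose rows list the k nearest neighbors of each location
# (each location counts as its own nearest neighbor)
knn_mat <- coords_to_knn(coords, k = 3)

# A list of integer vectors; each vector holds the locations of one zone,
# numbered as column indices of the counts matrix
zones <- knn_zones(knn_mat)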

Miscellaneous

  • score_locations: Score each location by how likely it is to have an ongoing anomaly in it. This score is heuristically motivated.
  • top_clusters: Get the top k space-time clusters, either overlapping or non-overlapping in the spatial dimension.
  • df_to_matrix: Convert a data frame with data for each location and time point to a matrix with locations along the column dimension and time along the row dimension, with the selected data as values.

Example: Brain cancer in New Mexico

To demonstrate the scan statistics in this package, we will use a dataset of the annual number of brain cancer cases in the counties of New Mexico, for the years 1973-1991. This data was studied by Kulldorff (1998), who detected a cluster of cancer cases in the counties Los Alamos and Santa Fe during the years 1986-1989, though the excess of brain cancer in this cluster was not deemed statistically significant. The data originally comes from the package rsatscan, which provides an interface to the program SaTScan, but it has been aggregated and extended for the scanstatistics package.

To get familiar with the counties of New Mexico, we begin by plotting them on a map using the data frames NM_map and NM_geo supplied by the scanstatistics package:

library(scanstatistics)
library(ggplot2)

# Load map data
data(NM_map)
data(NM_geo)

# Plot map with labels at centroids
ggplot() + 
  geom_polygon(data = NM_map,
               mapping = aes(x = long, y = lat, group = group),
               color = "grey", fill = "white") +
  geom_text(data = NM_geo, 
            mapping = aes(x = center_long, y = center_lat, label = county)) +
  ggtitle("Counties of New Mexico")

We can further obtain the yearly number of cases and the population for each county for the years 1973-1991 from the data table NM_popcas provided by the package:

data(NM_popcas)
head(NM_popcas)
#>   year     county population count
#> 1 1973 bernalillo     353813    16
#> 2 1974 bernalillo     357520    16
#> 3 1975 bernalillo     368166    16
#> 4 1976 bernalillo     378483    16
#> 5 1977 bernalillo     388471    15
#> 6 1978 bernalillo     398130    18

It should be noted that Cibola county was split from Valencia county in 1981, and cases in Cibola are counted towards Valencia in the data.

A scan statistic for Poisson data

The Poisson distribution is a natural first option when dealing with count data. The scanstatistics package provides the two functions scan_eb_poisson and scan_pb_poisson with this distributional assumption. The first is an expectation-based[1] scan statistic for univariate Poisson-distributed data proposed by Neill et al. (2005), and we focus on this one in the example below. The second scan statistic is the population-based scan statistic proposed by Kulldorff (2001).

Using the Poisson scan statistic

The first argument to any of the scan statistics in this package should be a matrix (or array) of observed counts, whether they be integer counts or real-valued "counts". In such a matrix, the columns should represent locations and the rows the time intervals, ordered chronologically from the earliest interval in the first row to the most recent in the last. In this example we would like to detect a potential cluster of brain cancer in the counties of New Mexico during the years 1986-1989, so we begin by retrieving the count and population data from that period and reshaping them to a matrix using the helper function df_to_matrix:

library(dplyr)
#> 
#> Attaching package: 'dplyr'
#> The following objects are masked from 'package:stats':
#> 
#>     filter, lag
#> The following objects are masked from 'package:base':
#> 
#>     intersect, setdiff, setequal, union
counts <- NM_popcas %>% 
  filter(year >= 1986 & year < 1990) %>%
  df_to_matrix(time_col = "year", location_col = "county", value_col = "count")

Spatial zones

The second argument to scan_eb_poisson should be a list of integer vectors, each such vector being a zone, which is the name for the spatial component of a potential outbreak cluster. Such a zone consists of one or more locations grouped together according to their similarity across features, and each location is numbered as the corresponding column index of the counts matrix above (indexing starts at 1).
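
For illustration, a hypothetical zone list for three locations (columns 1-3 of the counts matrix) could look as follows, each vector being one zone:

zones_example <- list(1L, 2L, 3L, c(1L, 2L), c(2L, 3L))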

In this example, the locations are the counties of New Mexico and the features are the coordinates of the county seats. These are made available in the data table NM_geo. Similarity will be measured using the geographical distance between the seats of the counties, taking into account the curvature of the earth. A distance matrix is calculated using the spDists function from the sp package, which is then passed to dist_to_knn and on to knn_zones:

library(sp)
library(magrittr)

# Remove Cibola since cases have been counted towards Valencia. Ideally, this
# should be accounted for when creating the zones.
zones <- NM_geo %>%
  filter(county != "cibola") %>%
  select(seat_long, seat_lat) %>%
  as.matrix %>%
  spDists(x = ., y = ., longlat = TRUE) %>%
  dist_to_knn(k = 15) %>%
  knn_zones

Baselines

The advantage of expectation-based scan statistics is that parameters such as the expected value can be modelled and estimated from past data e.g. by some form of regression. For the expectation-based Poisson scan statistic, we can use a (very simple) Poisson GLM to estimate the expected value of the count in each county and year, accounting for the different populations in each region. Similar to the counts argument, the expected values should be passed as a matrix to the scan_eb_poisson function:

mod <- glm(count ~ offset(log(population)) + 1 + I(year - 1985),
           family = poisson(link = "log"),
           data = NM_popcas %>% filter(year < 1986))

ebp_baselines <- NM_popcas %>% 
  filter(year >= 1986 & year < 1990) %>%
  mutate(mu = predict(mod, newdata = ., type = "response")) %>%
  df_to_matrix(value_col = "mu")

Note that the population numbers are (perhaps poorly) interpolated from the censuses conducted in 1973, 1982, and 1991.

Calculation

We can now calculate the Poisson scan statistic. To give us more confidence in our detection results, we will perform 999 Monte Carlo replications, in which data are generated using the parameters from the null hypothesis and a new scan statistic is calculated. This is then summarized in a P-value, calculated as the proportion of replications in which the replicated scan statistic exceeded the observed one. The output of scan_eb_poisson is an object of class "scanstatistic", which comes with the print method seen below.

set.seed(1)
poisson_result <- scan_eb_poisson(counts = counts, 
                                  zones = zones, 
                                  baselines = ebp_baselines,
                                  n_mcsim = 999)
print(poisson_result)
#> Data distribution:                Poisson
#> Type of scan statistic:           expectation-based
#> Setting:                          univariate
#> Number of locations considered:   32
#> Maximum duration considered:      4
#> Number of spatial zones:          415
#> Number of Monte Carlo replicates: 999
#> Monte Carlo P-value:              0.005
#> Gumbel P-value:                   0.004
#> Most likely event duration:       4
#> ID of locations in MLC:           15, 26

As we can see, the most likely cluster for an anomaly spans the years 1986-1989 and involves the locations numbered 15 and 26, which correspond to the counties

counties <- as.character(NM_geo$county)
counties[c(15, 26)]
#> [1] "losalamos" "santafe"

These are the same counties detected by Kulldorff (1998), though their analysis was retrospective whereas ours was prospective. Our analysis also amounts to data dredging, since we deliberately used the same study period in the hope of detecting the same cluster.

A heuristic score for locations

We can score each county according to how likely it is to be part of a cluster in a heuristic fashion using the function score_locations, and visualize the results on a heatmap as follows:

# Calculate scores and add column with county names
county_scores <- score_locations(poisson_result, zones)
county_scores %<>% mutate(county = factor(counties[-length(counties)], 
                                          levels = levels(NM_geo$county)))

# Create a table for plotting
score_map_df <- merge(NM_map, county_scores, by = "county", all.x = TRUE) %>%
  arrange(group, order)

# As noted before, Cibola county counts have been attributed to Valencia county
score_map_df[score_map_df$subregion == "cibola", ] %<>%
  mutate(relative_score = score_map_df %>% 
                          filter(subregion == "valencia") %>% 
                          select(relative_score) %>% 
                          .[[1]] %>% .[1])

ggplot() + 
  geom_polygon(data = score_map_df,
               mapping = aes(x = long, y = lat, group = group, 
                             fill = relative_score),
               color = "grey") +
  scale_fill_gradient(low = "#e5f5f9", high = "darkgreen",
                      guide = guide_colorbar(title = "Relative\nScore")) +
  geom_text(data = NM_geo, 
            mapping = aes(x = center_long, y = center_lat, label = county),
            alpha = 0.5) +
  ggtitle("County scores")

A warning though: the score_locations function can be quite slow for large data sets. This might change in future versions of the package.

Finding the top-scoring clusters

Finally, if we want to know not just the most likely cluster, but say the five top-scoring space-time clusters, we can use the function top_clusters. The clusters returned can either be overlapping or non-overlapping in the spatial dimension, according to our liking.

top5 <- top_clusters(poisson_result, zones, k = 5, overlapping = FALSE)

# Find the counties corresponding to the spatial zones of the 5 clusters.
top5_counties <- top5$zone %>%
  purrr::map(get_zone, zones = zones) %>%
  purrr::map(function(x) counties[x])

# Add the counties corresponding to the zones as a column
top5 %<>% mutate(counties = top5_counties)

The top_clusters function includes Monte Carlo and Gumbel P-values for each cluster. These P-values are conservative, since secondary clusters from the original data are compared to the most likely clusters from the replicate data sets.

Concluding remarks

Other univariate scan statistics can be calculated in practically the same way as above, though the distribution parameters need to be adapted for each scan statistic.
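
For instance, here is a sketch of how the negative binomial scan statistic might be called on the same data; the thetas value below is a placeholder for the overdispersion parameter, and in practice the baselines would come from a negative binomial regression rather than the Poisson GLM above:

negbin_result <- scan_eb_negbin(counts = counts,
                                zones = zones,
                                baselines = ebp_baselines,
                                thetas = 1,  # placeholder overdispersion value(s)
                                n_mcsim = 999)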

Feedback

If you think this package lacks some functionality, or that something needs better documentation, I happily accept feedback either here at GitHub or via email at [email protected]. I'm also very interested in applying the methods in this package (current and future) to new problems, so if you know of any suitable public datasets, please tell me! A dataset with a multivariate response (e.g. multiple count variables) would be of particular interest for some of the scan statistics that will appear in future versions of the package.

References

Allévius, B., Höhle, M. (2017): An expectation-based space-time scan statistic for ZIP-distributed data, under review.

Kleinman, K. (2015): Rsatscan: Tools, Classes, and Methods for Interfacing with SaTScan Stand-Alone Software, https://CRAN.R-project.org/package=rsatscan.

Kulldorff, M., Athas, W. F., Feuer, E. J., Miller, B. A., Key, C. R. (1998): Evaluating Cluster Alarms: A Space-Time Scan Statistic and Brain Cancer in Los Alamos, American Journal of Public Health 88 (9), 1377–80.

Kulldorff, M. (2001): Prospective time periodic geographical disease surveillance using a scan statistic, Journal of the Royal Statistical Society, Series A (Statistics in Society), 164, 61–72.

Kulldorff, M., Heffernan, R., Hartman, J., Assunção, R. M., Mostashari, F. (2005): A space-time permutation scan statistic for disease outbreak detection, PLoS Medicine, 2 (3), 0216-0224.

Neill, D. B., Moore, A. W., Sabhnani, M., Daniel, K. (2005): Detection of Emerging Space-Time Clusters, In Proceedings of the Eleventh ACM SIGKDD International Conference on Knowledge Discovery in Data Mining, 218–27. ACM.

Neill, D. B., Moore, A. W., Cooper, G. F. (2006): A Bayesian Spatial Scan Statistic, Advances in Neural Information Processing Systems 18: Proceedings of the 2005 Conference.

Tango, T., Takahashi, K., Kohriyama, K. (2011): A Space-Time Scan Statistic for Detecting Emerging Outbreaks, Biometrics 67 (1), 106–15.

[1] Expectation-based scan statistics use past non-anomalous data to estimate distribution parameters, and then compare observed cluster counts from the time period of interest to these estimates. In contrast, population-based scan statistics compare counts inside a cluster to those outside, using only data from the period of interest, and do so conditional on the observed total count.


scanstatistics's Issues

scan_permutation and scan_eb_poisson

Hi Benjamin,

I was trying to use both of these functions from the scanstatistics package in a Jupyter R notebook. It crashed the kernel for scan_eb_poisson (a small sample based on simulated data and NM_geo), and scan_permutation was a long-running operation. Any ideas? 16 GB RAM, 4-core CPU, Windows 10.
Warning message in seq_len(nrow(x)):
"first element used of 'length.out' argument"

Error in seq_len(nrow(x)): argument must be coercible to non-negative integer
Traceback:

1. scan_permutation(counts = counts2, zones = zones, population = NULL,
   n_mcsim = 1, max_only = TRUE)
2. flipud(population)
3. rev(seq_len(nrow(x)))

The flipud function in the package source:

#' @keywords internal
flipud <- function(x) {
  x[rev(seq_len(nrow(x))), , drop = FALSE]
}
counts2.zip
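
A possible workaround, judging only from the traceback above (a guess, not a confirmed fix): the error arises inside flipud(population), and population was passed as NULL. Supplying an explicit population matrix with the same dimensions as the counts may avoid that code path:

# Hypothetical uniform population matrix with the same shape as counts2
population <- matrix(1, nrow = nrow(counts2), ncol = ncol(counts2))
scan_permutation(counts = counts2, zones = zones,
                 population = population, n_mcsim = 1, max_only = TRUE)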

Index out of bounds in Scan statistics

Dear Ben,

I'm pretty new to scanstatistics. When I was trying to run functions from the package, I got this error:
error: Mat::elem(): index out of bounds
Error in (function (counts, baselines, zones, zone_lengths, store_everything, :
Mat::elem(): index out of bounds

Could you please help me out?

Thanks.

Best,
Lusi

Why did the function scan_pb_poisson give different results than SaTScan?

Hi Benjamin,
I used scan_pb_poisson to conduct a space-time analysis with my dataset, and I found that the results given by scan_pb_poisson and those given by the software SaTScan were quite different.
My dataset contains daily disease counts from 2020/12/31 to 2021/4/14, for 10 locations with latitude and longitude.
For SaTScan, here are the settings:
[Input]
Time precision : Day
Coordinates : Lat/Long
[Analysis]
Type of Analysis : Space-Time
Probability Model : Poisson
Scan For Area With : High rates
And for scan_pb_poisson, here is my code:
counts <- SZ_counts %>%
  df_to_matrix(time_col = "time", location_col = "region", value_col = "count")
population <- SZ_counts %>%
  df_to_matrix(time_col = "time", location_col = "region", value_col = "population")
zones <- SZ_geo %>%
  select(long, lat) %>%
  as.matrix %>%
  spDists(x = ., y = ., longlat = TRUE) %>%
  dist_to_knn(k = 4) %>%
  knn_zones
regions <- as.character(SZ_geo$region)
result <- data.frame()
newcounts <- counts
newpopulation <- population
poisson_result <- scan_pb_poisson(counts = newcounts,
                                  zones = zones,
                                  population = newpopulation,
                                  n_mcsim = 999)
topclusters <- top_clusters(poisson_result, zones, k = 10, overlapping = FALSE)

top_regions <- topclusters$zone %>%
  purrr::map(get_zone, zones = zones) %>%
  purrr::map(function(x) regions[x])

new_top_regions <- c()
for (j in 1:length(top_regions)) {
  new_top_regions[j] <- paste(top_regions[[j]], collapse = ",")
}

topclusters$zonename <- new_top_regions
topclusters$endtime <- rownames(population)[53]
result <- rbind(result, topclusters)

For the same dataset, SaTScan gave the following results:

1. Location IDs included.: 5
   Coordinates / radius..: (22.726017 N, 114.254455 E) / 0 km
   Time frame............: 2021/2/21 to 2021/4/13
   Population............: 2508600
   Number of cases.......: 198
   Expected cases........: 43.90
   Annual cases / 100000.: 55.4
   Observed / expected...: 4.51
   Relative risk.........: 6.85
   Log likelihood ratio..: 174.120739
   P-value...............: < 0.00000000000000001

2. Location IDs included.: 4, 6, 1
   Coordinates / radius..: (22.754466 N, 113.942560 E) / 22.26 km
   Time frame............: 2021/2/10 to 2021/4/2
   Population............: 6369300
   Number of cases.......: 22
   Expected cases........: 111.46
   Annual cases / 100000.: 2.4
   Observed / expected...: 0.20
   Relative risk.........: 0.16
   Log likelihood ratio..: 63.471627
   P-value...............: < 0.00000000000000001

3. Location IDs included.: 3, 7, 8
   Coordinates / radius..: (22.528466 N, 114.061547 E) / 12.88 km
   Time frame............: 2021/1/1 to 2021/2/20
   Population............: 4265300
   Number of cases.......: 14
   Expected cases........: 73.21
   Annual cases / 100000.: 2.4
   Observed / expected...: 0.19
   Relative risk.........: 0.17
   Log likelihood ratio..: 40.022434
   P-value...............: 0.000000000000092

While scan_pb_poisson gave the following results:

zone  duration  score        relrisk_in  relrisk_out  Gumbel_pvalue  zonename  endtime
  15       104  392.4982441    4.248194    0.2996086      0.0000000  5         2021/2/28
  13       104  329.5112571    3.428604    0.2993484      0.0000000  4,5       2021/2/28

The two zones given by scan_pb_poisson are totally different from the three clusters given by SaTScan. Why is that?

In addition, SaTScan gives only one relative risk, but scan_pb_poisson gives two (relrisk_in and relrisk_out). How can I match these results?

No longer on CRAN

I noticed that this package has dropped off of CRAN. Do you know if anyone is maintaining the package at the moment?
If not, I'd be interested in taking over maintainer duties for the package and working to get it back on CRAN.
I've found it very useful in my work, and would like to keep it easily accessible.

Thanks,
-Paul

top clusters

Hi Ben, earlier I tried to get the top clusters using this syntax:
top10 <- top_clusters(res, zones, k = 10, overlapping = FALSE)
top10

but in the result (top10), all clusters had a Gumbel p-value of 0, and although I set overlapping = FALSE, the resulting clusters still overlapped. Then, when I read your updates to top_clusters and its documentation, the results were different from the first syntax and all of the MLC p-values were 0.01. Besides that, when I used the syntax to show the subregions of the top 10 clusters with flexible zones, there was an error:
Error: object of type 'closure' is not subsettable
What should I do? Thank you very much.

Here is the first syntax:

knn_mat <- coords_to_knn(unique(data[, 6:7]), 12)
zones <- knn_zones(knn_mat)

t <- length(unique(data$year))
m <- length(unique(data$subregion))
counts <- matrix(data$case, nrow = t, ncol = m)
View(counts)
population <- matrix(data$population, nrow = t, ncol = m)

res <- scan_pb_poisson(counts = counts,
                       zones = zones,
                       population = population,
                       n_mcsim = 99,
                       max_only = FALSE)

res$MLC

hotspot <- unique(data$id)[res$MLC$locations]
hotspot

# Top clusters
top10 <- top_clusters(res, zones, k = 10, overlapping = FALSE)
top10

# Show subregions in the top 10 clusters
j <- 1
clustersubregion <- list()
for (i in top10$zone) {
  clustersubregion[[j]] <- unique(data$id)[zones[[i]]]
  j <- j + 1
}
clustersubregion

The second syntax:

knn_mat <- coords_to_knn(unique(data[, 6:7]), 12)
zones <- knn_zones(knn_mat)

t <- length(unique(data$year))
m <- length(unique(data$subregion))
counts <- matrix(data$case, nrow = t, ncol = m)
# View(counts)
population <- matrix(data$population, nrow = t, ncol = m)

res <- scan_pb_poisson(counts = counts,
                       zones = zones,
                       population = population,
                       n_mcsim = 99,
                       max_only = FALSE)

res$MLC

hotspot <- unique(data$id)[res$MLC$locations]
hotspot

# Top cluster p-values
mc_pvalue <- function(observed, replicates) {
  if (length(replicates) == 0) {
    return(NULL)
  } else {
    f <- Vectorize(
      function(y) {
        (1 + sum(replicates > y)) / (1 + length(replicates))
      }
    )
    return(f(observed))
  }
}

gumbel_pvalue <- function(observed, replicates, method = "ML", ...) {
  if (length(replicates) < 2) {
    stop("Need at least 2 observations to fit Gumbel distribution.")
  }

  # Fit Gumbel distribution to Monte Carlo replicates
  gumbel_mu <- NA
  gumbel_sigma <- NA
  if (method == "ML") {
    gum_fit <- gum.fit(replicates, show = FALSE, ...)
    gumbel_mu <- gum_fit$mle[1]
    gumbel_sigma <- gum_fit$mle[2]
  } else {
    gumbel_sigma <- sqrt(6 * var(replicates) / pi^2)
    gumbel_mu <- mean(replicates) + digamma(1) * gumbel_sigma
  }

  pvalue <- pgumbel(observed, gumbel_mu, gumbel_sigma, lower.tail = FALSE)

  return(list(pvalue = pvalue,
              gumbel_mu = gumbel_mu,
              gumbel_sigma = gumbel_sigma))
}

mtop_clusters <- function(x, zones, k = 10, overlapping = FALSE, gumbel = FALSE,
                          alpha = NULL, ...) {
  k <- min(k, nrow(x$observed))
  if (overlapping) {
    return(x$observed[seq_len(k), ])
  } else {
    row_idx <- c(1L, integer(k - 1))
    seen_locations <- zones[[x$observed[1, ]$zone]]
    n_added <- 1L
    i <- 2L
    while (n_added < k && i <= nrow(x$observed)) {
      zone <- x$observed[i, ]$zone
      if (zone != x$observed[i - 1, ]$zone &&
          length(intersect(seen_locations, zones[[zone]])) == 0) {
        seen_locations <- c(seen_locations, zones[[zone]])
        n_added <- n_added + 1L
        row_idx[n_added] <- i
      }
      i <- i + 1L
    }
    res <- x$observed[row_idx[row_idx > 0], ]

    if (nrow(x$replicates) > 0) {
      res$MC_pvalue <- mc_pvalue(res$score, x$replicates$score)

      if (gumbel) {
        res$Gumbel_pvalue <- gumbel_pvalue(res$score,
                                           x$replicates$score)$pvalue
      }
      if (!is.null(alpha) && alpha >= 0 && alpha <= 1) {
        res$critical_value <- quantile(x$replicates$score, 1 - alpha)
      }
    }
    return(res)
  }
}

top10 <- mtop_clusters(res, zones, k = 10, overlapping = FALSE, gumbel = FALSE, alpha = 0.05)
top10

# Show subregions in the top 10 clusters
j <- 1
clustersubregion <- list()
for (i in top10$zone) {
  clustersubregion[[j]] <- unique(data$id)[zones[[i]]]
  j <- j + 1
}
clustersubregion

1.0.2 release?

You added functionality after the 1.0.1 release, for us most relevantly in

30b424c

but have not yet done a release including it. Is one planned?

Missing square root in scores for scan_eb_negbin?

I've been digging into this package, and I noticed that you're using the formula sum((y - m) / w) / sum(m / w) to calculate "hotspot" scores for the scan_eb_negbin function, but Tango et al. (2011) use the formula sum((y-m) / w) / sqrt(sum(m / w)). Was there a deliberate reason for this change or is this a bug?
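
For reference, here are the two formulas side by side as small R functions; the names and arguments (y = observed counts, m = expected counts, w = weights) are illustrative, taken from the issue text rather than the package source:

score_as_coded <- function(y, m, w) sum((y - m) / w) / sum(m / w)
score_as_in_tango <- function(y, m, w) sum((y - m) / w) / sqrt(sum(m / w))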

Thanks,
-Paul
