olahallengren / sql-server-maintenance-solution

SQL Server Maintenance Solution

Home Page: https://ola.hallengren.com

License: MIT License

Language: TSQL 100.00%
Topics: sqlserver


sql-server-maintenance-solution's Issues

IndexOptimize tries to reorganize index with an invalid argument

The new version of IndexOptimize (found in commit 8ae5950) sends the RESUMABLE argument to the ALTER INDEX ... REORGANIZE command, which does not support it.

https://docs.microsoft.com/en-us/sql/t-sql/statements/alter-index-transact-sql?view=sql-server-2017#syntax

This causes an error:

Msg 50000, Level 16, State 1, Procedure dbo.CommandExecute, Line 163 [Batch Start Line 0]
Msg 155, 'RESUMABLE' is not a recognized ALTER INDEX REORGANIZE option.
Date and time: 2018-06-11 07:49:06
Server: sgsatusql
Version: 12.0.2000.8
Edition: SQL Azure
Platform: Windows
Procedure: [SgSatuDb].[dbo].[IndexOptimize]
Parameters: @Databases = 'sgSatuDb', @FragmentationLow = NULL, @FragmentationMedium = 'INDEX_REORGANIZE,INDEX_REBUILD_ONLINE,INDEX_REBUILD_OFFLINE', @FragmentationHigh = 'INDEX_REBUILD_ONLINE,INDEX_REBUILD_OFFLINE', @FragmentationLevel1 = 5, @FragmentationLevel2 = 30, @MinNumberOfPages = 1000, @MaxNumberOfPages = NULL, @SortInTempdb = 'N', @MaxDOP = NULL, @FillFactor = NULL, @PadIndex = NULL, @LOBCompaction = 'Y', @UpdateStatistics = NULL, @OnlyModifiedStatistics = 'N', @StatisticsSample = NULL, @StatisticsResample = 'N', @PartitionLevel = 'Y', @MSShippedObjects = 'N', @Indexes = '%.dwData.%, %.dw.%', @TimeLimit = NULL, @Delay = NULL, @WaitAtLowPriorityMaxDuration = NULL, @WaitAtLowPriorityAbortAfterWait = NULL, @Resumable = 'N', @AvailabilityGroups = NULL, @LockTimeout = NULL, @LogToTable = 'Y', @Execute = 'N'
Version: 2018-06-10 16:09:01
Source: https://ola.hallengren.com
 
Date and time: 2018-06-11 07:49:06
Database: [SgSatuDb]
Status: ONLINE
Standby: No
Updateability: READ_WRITE
User access: MULTI_USER
Recovery model: FULL
 
Date and time: 2018-06-11 07:49:06
Command: ALTER INDEX [PK_H_Palkkatiliointi] ON [SgSatuDb].[dwData].[H_Palkkatiliointi] REORGANIZE WITH (RESUMABLE = OFF, LOB_COMPACTION = ON)
Comment: ObjectType: Table, IndexType: Clustered, ImageText: No, NewLOB: No, FileStream: No, ColumnStore: No, AllowPageLocks: Yes, PageCount: 102425, Fragmentation: 28.5018
Outcome: Not Executed
Duration: 00:00:00
Date and time: 2018-06-11 07:49:06
 
Date and time: 2018-06-11 07:49:07
Command: ALTER INDEX [I_H_TyontekijanEsimies_Avain] ON [SgSatuDb].[dwData].[H_TyontekijanEsimies] REORGANIZE WITH (RESUMABLE = OFF, LOB_COMPACTION = ON)
Comment: ObjectType: Table, IndexType: NonClustered, ImageText: No, NewLOB: No, FileStream: No, ColumnStore: No, AllowPageLocks: Yes, PageCount: 1266, Fragmentation: 19.4313
Outcome: Not Executed
Duration: 00:00:00
Date and time: 2018-06-11 07:49:07
 
Date and time: 2018-06-11 07:49:07
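
For reference, RESUMABLE is valid only for ALTER INDEX ... REBUILD; REORGANIZE accepts only LOB_COMPACTION (plus COMPRESS_ALL_ROW_GROUPS for columnstore). The generated command works once the unsupported option is dropped:

    ALTER INDEX [PK_H_Palkkatiliointi] ON [SgSatuDb].[dwData].[H_Palkkatiliointi]
    REORGANIZE WITH (LOB_COMPACTION = ON);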

Create config tables

We use MSX in our environment; having a config table so we could customize the jobs per server would be nice.

Needed transaction log backups can be deleted

About a week ago we had a system that could not be fully restored. We had a full backup from roughly 48 hours before we attempted the restore. The CleanupTime parameter for the transaction log backups was set to 24.

In the script it looks like the database_backup_lsn is being used to determine the last valid transaction log which can be cleaned up. I'm not sure that's the right value to check though.

Based on what I've been able to read on LSNs (I am by no means an expert), the full backup's last_lsn should fall between the first log's first_lsn and last_lsn. Or... log.first_lsn <= full.last_lsn && log.last_lsn > full.last_lsn.

In our case that log file was about 10 files too early and was deleted. That lines up with our problem scenario - SQL errored out saying the file we were trying to restore was too new. I've got the output of the backup metadata if that would be helpful, but unfortunately I don't have the output of the errors.

My guess is this isn't normally an issue. This server has an exceptionally long full backup time, which may have thrown the ordering off from what's expected. But if it can happen here then it might happen somewhere else too. And if you need to recover to full + logs instead of full + diff + logs, that's a problem.

tl;dr
Change the CleanupTime safety check to use last_lsn instead of database_backup_lsn.
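
For illustration, a hedged sketch of the proposed check against msdb's backup history (the database name is a placeholder): keep every log backup whose LSN range ends after the most recent full backup's last_lsn.

    -- Last full backup's last_lsn for the database ('D' = full database backup)
    DECLARE @FullLastLSN numeric(25,0);

    SELECT TOP (1) @FullLastLSN = last_lsn
    FROM msdb.dbo.backupset
    WHERE database_name = N'MyDatabase' AND [type] = 'D'
    ORDER BY backup_finish_date DESC;

    -- Log backups that must be kept: those whose range ends after the full's last_lsn
    SELECT bmf.physical_device_name
    FROM msdb.dbo.backupset AS bs
    JOIN msdb.dbo.backupmediafamily AS bmf ON bmf.media_set_id = bs.media_set_id
    WHERE bs.database_name = N'MyDatabase' AND bs.[type] = 'L'
      AND bs.last_lsn > @FullLastLSN;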

DatabaseBackup - configurable folder names

Current behavior: backup file/folder names are hard-coded in this style:

SET @CurrentFilePath = @CurrentDirectoryPath + '\'
    + CASE WHEN @CurrentAvailabilityGroup IS NOT NULL THEN @Cluster + '$' + @CurrentAvailabilityGroup
           ELSE REPLACE(CAST(SERVERPROPERTY('servername') AS nvarchar(max)),'\','$') END
    + '_' + @CurrentDatabaseNameFS + '_' + UPPER(@CurrentBackupType)
    + CASE WHEN @ReadWriteFileGroups = 'Y' THEN '_PARTIAL' ELSE '' END
    + CASE WHEN @CopyOnly = 'Y' THEN '_COPY_ONLY' ELSE '' END
    + '_' + REPLACE(REPLACE(REPLACE((CONVERT(nvarchar,@CurrentDate,120)),'-',''),' ','_'),':','')
    + CASE WHEN @NumberOfFiles > 1 AND @NumberOfFiles <= 9 THEN '_' + CAST(@CurrentFileNumber AS nvarchar)
           WHEN @NumberOfFiles >= 10 THEN '_' + RIGHT('0' + CAST(@CurrentFileNumber AS nvarchar),2)
           ELSE '' END
    + '.' + @CurrentFileExtension

The server name, cluster, and Availability Group parts are hard-coded.

We need to be able to predict the backup folder path on this project in order to let multiple servers access the same set of backups, and keep backups in the same folder as we fail around from one server to another. (For more details, check out the advanced architecture diagram at the bottom of sp_AllNightLog's documentation.)

Proposed Solution

  • Don't add any new parameters (to keep things simple)
  • Don't touch file names (I don't need that, but if somebody else does, they can code it)
  • Use the existing @Directory NVARCHAR(MAX) parameter
  • If the @Directory parameter has '**' anywhere in it, trigger the new behavior

New behavior:

  • Declare a new @DirectoryOverride internal parameter (not visible to users, only inside the proc):
  • Set @DirectoryOverride to everything to the right of the first **. For example, if they passed in @Directory = 'C:\TEMP\**SERVERNAME**\**DATABASENAME**', then set @DirectoryOverride to '**SERVERNAME**\**DATABASENAME**'
  • Remove everything ** and after in @Directory. So for example, if they passed in @Directory = 'C:\TEMP\**SERVERNAME**\**DATABASENAME**', then set @Directory to 'C:\TEMP\'
  • In the loop of doing backups, if @DirectoryOverride is not null, create the necessary directories based on @DirectoryOverride
      IF @DirectoryOverride IS NULL
      BEGIN
        SET @CurrentDirectoryOverride = CASE WHEN @CurrentAvailabilityGroup IS NOT NULL THEN @Cluster + '$' + @CurrentAvailabilityGroup ELSE REPLACE(CAST(SERVERPROPERTY('servername') AS nvarchar(max)),'\','$') END + '\' + @CurrentDatabaseNameFS + '\' + UPPER(@CurrentBackupType);
      END
      ELSE /* IF @DirectoryOverride IS NULL */
      BEGIN
        SET @CurrentDirectoryOverride = @DirectoryOverride;
        SET @CurrentDirectoryOverride = REPLACE(@CurrentDirectoryOverride, '**CLUSTER**', COALESCE(@Cluster,''));
        SET @CurrentDirectoryOverride = REPLACE(@CurrentDirectoryOverride, '**AVAILABILITYGROUP**', COALESCE(@CurrentAvailabilityGroup,''));
        SET @CurrentDirectoryOverride = REPLACE(@CurrentDirectoryOverride, '**SERVERNAME**', REPLACE(CAST(SERVERPROPERTY('servername') AS nvarchar(max)),'\','$'));
        IF CHARINDEX('\',CAST(SERVERPROPERTY('servername') AS nvarchar(max))) > 0
        BEGIN
            SET @CurrentDirectoryOverride = REPLACE(@CurrentDirectoryOverride, '**SERVERNAMEWITHOUTINSTANCE**', SUBSTRING(CAST(SERVERPROPERTY('servername') AS nvarchar(max)), 1, (CHARINDEX('\',CAST(SERVERPROPERTY('servername') AS nvarchar(max))) - 1)));
            SET @CurrentDirectoryOverride = REPLACE(@CurrentDirectoryOverride, '**INSTANCENAME**', SUBSTRING(CAST(SERVERPROPERTY('servername') AS nvarchar(max)), CHARINDEX('\',CAST(SERVERPROPERTY('servername') AS nvarchar(max))) + 1, LEN(CAST(SERVERPROPERTY('servername') AS nvarchar(max))) - CHARINDEX('\',CAST(SERVERPROPERTY('servername') AS nvarchar(max))))); /* start after the backslash so the instance name does not keep a leading '\' */
        END
        ELSE /* No instance installed */
        BEGIN
            SET @CurrentDirectoryOverride = REPLACE(@CurrentDirectoryOverride, '**SERVERNAMEWITHOUTINSTANCE**', CAST(SERVERPROPERTY('servername') AS nvarchar(max)));
            SET @CurrentDirectoryOverride = REPLACE(@CurrentDirectoryOverride, '**INSTANCENAME**', 'DEFAULT');
        END
        SET @CurrentDirectoryOverride = REPLACE(@CurrentDirectoryOverride, '**DATABASENAME**', @CurrentDatabaseNameFS);
        SET @CurrentDirectoryOverride = REPLACE(@CurrentDirectoryOverride, '**BACKUPTYPE**', UPPER(@CurrentBackupType));
      END /* IF @DirectoryOverride IS NOT NULL */

Things we won't replace in this code:

  • CurrentDate - that would have to be replaced at every database backup command since it changes throughout the execution as time marches ever forward
  • Read/Write filegroups
  • Number of files

Why Use Asterisks?

Because the asterisk * isn't allowed in Windows or Linux folder names. I was originally going to use forward slashes, but it turns out there are known bugs in the BACKUP and RESTORE commands (Microsoft's, not Ola's) with those. For example, BACKUP simply discards forward slashes, so someone might already have had //SERVERNAME// in their backups, working despite themselves, and this new code would have broken them.

The current DatabaseBackup behavior is to error out if @Directory has '**' - try running these:

EXECUTE [dbo].[DatabaseBackup] @Databases = 'USER_DATABASES', @Directory = N'C:\**TEMP**\',
 @BackupType = 'FULL', @Verify = 'Y', @CleanupTime = NULL, @CheckSum = 'Y', @LogToTable = 'Y'

EXECUTE [dbo].[DatabaseBackup] @Databases = 'USER_DATABASES', @Directory = N'C:\TEMP\**SERVERNAME**',
 @BackupType = 'FULL', @Verify = 'Y', @CleanupTime = NULL, @CheckSum = 'Y', @LogToTable = 'Y'

You get errors like:

Msg 50000, Level 16, State 1, Procedure DatabaseBackup, Line 621 [Batch Start Line 11]
The directory C:\**TEMP**\ does not exist.

Msg 50000, Level 16, State 1, Procedure DatabaseBackup, Line 621 [Batch Start Line 11]
The directory C:\TEMP\**SERVERNAME** does not exist.

Work Required

  • Implement & test the folder name replacement (draft done)
  • Implement & test @MirrorDirectory replacement (draft done)
  • Document the options (working on now)
  • Build input-validation code to make sure they type the params right (TBD)
  • Test standalone instance (done)
  • Test clustered instance
  • Test named instance (done)
  • Test on AG

Code in progress:
https://github.com/BrentOzarULTD/sql-server-maintenance-solution/blob/issue_14/brent/DatabaseBackup.sql

Install-All-Scripts missing objects

The module 'sp_DatabaseRestore' depends on the missing object 'dbo.CommandExecute'. The module will still be created; however, it cannot run successfully until the object exists.

The module 'sp_AllNightLog' depends on the missing object 'master.dbo.DatabaseBackup'. The module will still be created; however, it cannot run successfully until the object exists.

Option WITH INIT

Hi Ola,

could you please implement the WITH INIT option?

I need it when I use the parameter:
@FileName = '{DatabaseName}.{FileExtension}'

Thank you
Harald
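
For context, WITH INIT overwrites the existing backup sets in the target file instead of appending to them, which matters when a static file name like {DatabaseName}.{FileExtension} always resolves to the same file; a native example:

    BACKUP DATABASE [MyDatabase]
    TO DISK = N'C:\Backup\MyDatabase.bak'
    WITH INIT, CHECKSUM;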

Arithmetic overflow error converting expression to data type int

With the latest version (6/3/18), 98% of the IndexOptimize run works well.
However, I started seeing this error:

Msg 8115, Level 16, State 2, Line 1
Arithmetic overflow error converting expression to data type int.

This is how I have been running the code in a SQL Agent job:

EXECUTE dbo.IndexOptimize
@databases = '(dbname list)'
,@FragmentationLow = NULL
,@FragmentationMedium = 'INDEX_REORGANIZE,INDEX_REBUILD_ONLINE'
,@FragmentationHigh = 'INDEX_REBUILD_ONLINE'
,@FragmentationLevel1 = 25
,@FragmentationLevel2 = 45
,@SortInTempdb = 'Y'
,@FillFactor = 85
,@UpdateStatistics = 'ALL'
,@indexes = 'ALL_INDEXES'
,@LogToTable = 'Y'
,@execute = 'Y'

2017 Smart Backups

It would be nice to implement Smart differential and full backups for 2017. Here is a quote from the Tiger Team Log and how this works:

Smart Differential Backup – A new column modified_extent_page_count is introduced in sys.dm_db_file_space_usage to track differential changes in each database file of the database. The new column modified_extent_page_count will allow DBAs, SQL Community and backup ISVs to build smart backup solution which performs differential backup if percentage changed pages in the database is below a threshold (say 70-80%) else perform full database backup. With large number of changes in the database, cost and time to complete differential backup is similar to that of full database backup so there is no real benefit of taking differential backup in this case but it can rather increase the restore time of database. By adding this intelligence to the backup solutions, customers can now save on restore and recovery time while using differential backups.

Consider a scenario where you previously had a backup plan to take full database backup on weekends and differential backup daily. In this case, if the database is down on Friday, you will need to restore full db backup from Sunday, differential backups from Thursday and then T-log backups from Friday. By leveraging modified_extent_page_count in your backup solution, you can now take full database backup on Sunday and lets say by Wednesday, if 90% of pages have changed, the backup solution should take full database backup rather than differential backup. Now, if the database goes down on Friday, you can restore the full db backup from Wednesday, small differential backup from Thursday and T-log backups from Friday to restore and recover the database quickly compared to the previous scenario. This feature was requested by customers and community in connect item 511305.

USE <database name> -- the target database name was omitted in the original
GO

select CAST(ROUND((modified_extent_page_count*100.0)/allocated_extent_page_count,2)
as decimal(9,2))
from sys.dm_db_file_space_usage
GO

select CAST(ROUND((SUM(modified_extent_page_count)*100.0)/SUM(allocated_extent_page_count),2)
as decimal(9,2))
as '% Differential Changes since last backup'
from sys.dm_db_file_space_usage

https://blogs.msdn.microsoft.com/sql_server_team/sql-server-community-driven-enhancements-in-sql-server-2017/
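
A hedged sketch of how a smart-backup wrapper could use that DMV (SQL Server 2016 SP2/2017+; the database name, paths, and 70% threshold are placeholders):

    DECLARE @PercentChanged decimal(9,2);

    -- Run in the context of the target database
    SELECT @PercentChanged = CAST(ROUND((SUM(modified_extent_page_count) * 100.0)
                                        / SUM(allocated_extent_page_count), 2) AS decimal(9,2))
    FROM sys.dm_db_file_space_usage;

    IF @PercentChanged >= 70.0
        BACKUP DATABASE [MyDatabase] TO DISK = N'C:\Backup\MyDatabase_FULL.bak';
    ELSE
        BACKUP DATABASE [MyDatabase] TO DISK = N'C:\Backup\MyDatabase_DIFF.bak' WITH DIFFERENTIAL;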

Stored procedure version comment and database extended property

Please allow it to be easy to identify what version of the maintenance solution is installed. For example, include the last updated date of the maintenance solution as a comment in the code of each procedure.
--// Source: https://ola.hallengren.com
can become
--// Source: https://ola.hallengren.com
--// Version: 2016-04-02

Code can then be used to check which version is installed. The code can also add or update a database extended property with the version. If an incorrect version is installed, the value NULL can be stored in the extended property.

USE [master] -- Specify the database in which the objects will be created.

SET NOCOUNT ON

DECLARE @CreateJobs nvarchar(max)
DECLARE @BackupDirectory nvarchar(max)
DECLARE @Cleanuptime int
DECLARE @OutputFileDirectory nvarchar(max)
DECLARE @LogToTable nvarchar(max)
DECLARE @Version numeric(18,10)
DECLARE @error int
DECLARE @LastUpdatedExtPropName sysname
DECLARE @LastUpdatedExtPropValue datetime

SET @CreateJobs = 'Y' -- Specify whether jobs should be created.
SET @BackupDirectory = N'C:\Backup' -- Specify the backup root directory.
SET @Cleanuptime = NULL -- Time in hours, after which backup files are deleted. If no time is specified, then no backup files are deleted.
SET @OutputFileDirectory = NULL -- Specify the output file directory. If no directory is specified, then the SQL Server error log directory is used.
SET @LogToTable = 'Y' -- Log commands to a table.

SET @error = 0
SET @LastUpdatedExtPropName = N'https://ola.hallengren.com last updated' -- set to NULL to disable setting the extended property
SET @LastUpdatedExtPropValue = '2018-04-02 00:00:00' -- set to NULL to disable stored procedure version check

SET @Version = CAST(LEFT(CAST(SERVERPROPERTY('ProductVersion') AS nvarchar(max)),CHARINDEX('.',CAST(SERVERPROPERTY('ProductVersion') AS nvarchar(max))) - 1) + '.' + REPLACE(RIGHT(CAST(SERVERPROPERTY('ProductVersion') AS nvarchar(max)), LEN(CAST(SERVERPROPERTY('ProductVersion') AS nvarchar(max))) - CHARINDEX('.',CAST(SERVERPROPERTY('ProductVersion') AS nvarchar(max)))),'.','') AS numeric(18,10))

IF IS_SRVROLEMEMBER('sysadmin') = 0
BEGIN
RAISERROR('You need to be a member of the SysAdmin server role to install the solution.',16,1)
SET @error = @@error
END

IF OBJECT_ID('tempdb..#Config') IS NOT NULL DROP TABLE #Config

CREATE TABLE #Config ([Name] nvarchar(max),
[Value] nvarchar(max))

IF @CreateJobs = 'Y' AND @OutputFileDirectory IS NULL AND SERVERPROPERTY('EngineEdition') <> 4 AND @Version < 12
BEGIN
IF @Version >= 11
BEGIN
SELECT @OutputFileDirectory = [path]
FROM sys.dm_os_server_diagnostics_log_configurations
END
ELSE
BEGIN
SELECT @OutputFileDirectory = LEFT(CAST(SERVERPROPERTY('ErrorLogFileName') AS nvarchar(max)),LEN(CAST(SERVERPROPERTY('ErrorLogFileName') AS nvarchar(max))) - CHARINDEX('\',REVERSE(CAST(SERVERPROPERTY('ErrorLogFileName') AS nvarchar(max)))))
END
END

IF @CreateJobs = 'Y' AND RIGHT(@OutputFileDirectory,1) = '\' AND SERVERPROPERTY('EngineEdition') <> 4
BEGIN
SET @OutputFileDirectory = LEFT(@OutputFileDirectory, LEN(@OutputFileDirectory) - 1)
END

INSERT INTO #Config ([Name], [Value])
VALUES('CreateJobs', @CreateJobs)

INSERT INTO #Config ([Name], [Value])
VALUES('BackupDirectory', @BackupDirectory)

INSERT INTO #Config ([Name], [Value])
VALUES('CleanupTime', @Cleanuptime)

INSERT INTO #Config ([Name], [Value])
VALUES('OutputFileDirectory', @OutputFileDirectory)

INSERT INTO #Config ([Name], [Value])
VALUES('LogToTable', @LogToTable)

INSERT INTO #Config ([Name], [Value])
VALUES('DatabaseName', DB_NAME(DB_ID()))

INSERT INTO #Config ([Name], [Value])
VALUES('Error', CAST(@error AS nvarchar))

INSERT INTO #Config ([Name], [Value])
VALUES('LastUpdatedExtPropName', @LastUpdatedExtPropName)

INSERT INTO #Config ([Name], [Value])
VALUES('LastUpdatedExtPropValue', CONVERT(varchar(23), @LastUpdatedExtPropValue, 120))
GO
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO

IF EXISTS(SELECT 1 FROM #Config WHERE Name = 'LastUpdatedExtPropValue' AND ISDATE(value) = 1)
BEGIN

DECLARE @currentversion datetime
DECLARE @error int
DECLARE @LastUpdatedExtPropName sysname
DECLARE @LastUpdatedExtPropValue datetime
DECLARE @rowcount int
DECLARE @text nvarchar(max)
DECLARE @tempver datetime
DECLARE @VersionExtPropValue sql_variant

SELECT @LastUpdatedExtPropName = CAST(Value as sysname)
FROM #Config
WHERE [Name] = 'LastUpdatedExtPropName'

SELECT @LastUpdatedExtPropValue = CAST(Value as datetime)
FROM #Config
WHERE [Name] = 'LastUpdatedExtPropValue'

SET @currentversion = @LastUpdatedExtPropValue

IF NOT @currentversion IS NULL
BEGIN
SET @text = OBJECT_DEFINITION (OBJECT_ID(N'[dbo].[CommandExecute]'))
SET @tempver = CASE CHARINDEX('--// Version: ', @text) WHEN 0 THEN NULL ELSE SUBSTRING(@text, 14 + CHARINDEX('--// Version: ', @text), 23) END
IF @tempver IS NULL OR @tempver <> @currentversion SET @currentversion = NULL
END

IF NOT @currentversion IS NULL
BEGIN
SET @text = OBJECT_DEFINITION (OBJECT_ID(N'[dbo].[DatabaseBackup]'))
SET @tempver = CASE CHARINDEX('--// Version: ', @text) WHEN 0 THEN NULL ELSE SUBSTRING(@text, 14 + CHARINDEX('--// Version: ', @text), 23) END
IF @tempver IS NULL OR @tempver <> @currentversion SET @currentversion = NULL
END

IF NOT @currentversion IS NULL
BEGIN
SET @text = OBJECT_DEFINITION (OBJECT_ID(N'[dbo].[DatabaseIntegrityCheck]'))
SET @tempver = CASE CHARINDEX('--// Version: ', @text) WHEN 0 THEN NULL ELSE SUBSTRING(@text, 14 + CHARINDEX('--// Version: ', @text), 23) END
IF @tempver IS NULL OR @tempver <> @currentversion SET @currentversion = NULL
END

IF NOT @currentversion IS NULL
BEGIN
SET @text = OBJECT_DEFINITION (OBJECT_ID(N'[dbo].[IndexOptimize]'))
SET @tempver = CASE CHARINDEX('--// Version: ', @text) WHEN 0 THEN NULL ELSE SUBSTRING(@text, 14 + CHARINDEX('--// Version: ', @text), 23) END
IF @tempver IS NULL OR @tempver <> @currentversion SET @currentversion = NULL
END

IF @currentversion IS NULL
BEGIN
RAISERROR('Warning: The maintenance stored procedures installed are not current.',16,1)
SET @error = @@error
END

IF LEN(@LastUpdatedExtPropName) > 0
BEGIN

SELECT @VersionExtPropValue = [value]
FROM sys.extended_properties
WHERE class = 0 AND major_id = 0 AND minor_id = 0 AND [name] = @LastUpdatedExtPropName;

SELECT @error = @@ERROR, @rowcount = @@ROWCOUNT

IF @rowcount = 0 
  EXEC sys.sp_addextendedproperty @name = @LastUpdatedExtPropName, @value = @CurrentVersion;
ELSE IF @VersionExtPropValue <> @CurrentVersion OR (@VersionExtPropValue IS NULL AND NOT @CurrentVersion IS NULL) OR (NOT @VersionExtPropValue IS NULL AND @CurrentVersion IS NULL) 
  EXEC sys.sp_updateextendedproperty @name = @LastUpdatedExtPropName, @value = @CurrentVersion;

END

END

IndexOptimize: Heap (de)fragmentation

Feature request - adding the ability to identify and "fix" heap fragmentation.

I know a heap is usually a bad idea and that heap maintenance is a pain, but there are some valid use cases for heaps.
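
For reference, heaps can be found via sys.dm_db_index_physical_stats (index_id 0) and defragmented with a table rebuild (SQL Server 2008+), which also rebuilds the table's nonclustered indexes; a minimal sketch:

    -- Fragmented heaps in the current database
    SELECT OBJECT_NAME(ips.object_id) AS TableName, ips.avg_fragmentation_in_percent
    FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
    WHERE ips.index_id = 0; -- index_id 0 = heap

    -- Rebuild one heap (table name is a placeholder)
    ALTER TABLE [dbo].[MyHeapTable] REBUILD;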

Backup primary Availability Group databases on Secondary Server

Hi, I have a SQL Server environment that has one primary and one secondary with auto failover.

Server1 -Primary
Server2 -Secondary

I want to create a backup job that should always backup from the primary server.
I configured this job on Server1; if my AG fails over to the other server, the backup does not happen.

So my solution should be like this,

Go and check which is the primary server, then run the backup script against that server.

or,

Is there any way to run this backup script directly against the listener?
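
One common approach (SQL Server 2014+) is to deploy the same agent job on every replica and have the job step exit early unless the local replica is currently the primary for the database; a hedged sketch with placeholder names:

    IF sys.fn_hadr_is_primary_replica(N'MyAgDatabase') = 1
    BEGIN
        EXECUTE dbo.DatabaseBackup
            @Databases = N'MyAgDatabase',
            @Directory = N'\\backupshare\sql',
            @BackupType = 'FULL';
    END;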

@URL parameter and backup subdirectories

Currently, backup to URL with Azure supports subdirectories; this is not implemented in the solution, so all backup files are dropped into the root folder supplied as a parameter in the job (or the default value).

Could this be changed so that when a backup to a URL occurs in Azure, subdirectories are used in the same way as when backing up to disk?
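
For context, BACKUP TO URL itself already accepts virtual subdirectories below the container, so the change is about how DatabaseBackup composes the URL; a native example (account, container, and path are placeholders):

    -- Assumes a suitable credential for the container already exists
    BACKUP DATABASE [MyDatabase]
    TO URL = N'https://myaccount.blob.core.windows.net/mycontainer/MyServer/MyDatabase/MyDatabase_FULL.bak';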

DatabaseBackup - option to escalate backups by checking MSDB

In DatabaseBackup, the current behavior of @ChangeBackupType is:

  • N - if they specify log backups, only do log backups
  • Y - if they specify log backups, but the database is in pseudo-full recovery model (like if someone flipped it from full to simple back to full), then change the backup type to full for this database

We need a third option: if there's no full backup in MSDB, escalate to a full.

There are a couple of things driving this:

  • If we restore an existing database to a new name (like to clone it for a new customer), we need to take a full backup right away for that new name. Plus, the database may have been moved to a new server (or data center) as part of the restore/creation process.
  • If we fail over to another data center, we want to get a full backup as quickly as possible after failover.

To do it, I propose that we add a new @ChangeBackupType = 'MSDB' option. There, we check MSDB's backup history for a full backup of this database, and if it doesn't exist, escalate to a full.

Adding it as a new @ChangeBackupType option means it keeps 100% compatibility for all existing users. We're not changing any existing behavior.

Possible gotchas:

  • In the data center failover scenario, if someone isn't purging their MSDB backup history, they may have an old full in there for this database. We could work around that by doing an hours parameter, or adding something like MSDB72, where 72 (or whatever number) indicates the number of hours. I'm not coding that here.
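
A hedged sketch of the core check behind the proposed 'MSDB' option (variable names are assumptions, not the procedure's actual internals):

    IF NOT EXISTS (SELECT 1
                   FROM msdb.dbo.backupset
                   WHERE database_name = @CurrentDatabaseName -- assumed loop variable
                     AND [type] = 'D')                        -- 'D' = full database backup
    BEGIN
        SET @CurrentBackupType = 'FULL'; -- no full backup in msdb history: escalate
    END;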

Allow including/excluding of statistics to rebuild

Hello Ola,

thanks for your really great solution which I've worked with during the last years.
One scenario however came up several times, which (to my understanding) is not yet possible. Maybe you can consider it as feature request:

For the index maintenance you can already either exclude specific indexes from the rebuild or choose to only consider specific indexes.
Would it be possible to apply the same logic to statistics?

My use case is that I have scheduled statistics maintenance with OnlyModifiedStatistics = 'Y'; however, I have a small number of large tables which take ages to build statistics on, but these tables are rarely used for SELECT queries (mostly data is just written into them as new rows). So even though the data was modified, I can live with out-of-date statistics.

It would be great to have a possibility to exclude those particular statistics (either by table name or by statistics name) from the scheduled rebuild and only include them in less frequent rebuilds.

Thanks for considering and keep up the good work!

Running DBCC CHECKDB in the order of the last successful CHECKDB (LastKnownGood)

Hi Ola,
it would be perfect to be able to run DBCC CHECKDB in something other than alphabetical order.
Maybe we could add an option to run DBCC CHECKDB according to the latest successful CHECKDB (LastKnownGood).
I have a customer with a large instance of 20 databases, each database 2-5 TB in size.
The customer wants DBCC CHECKDB to run only during the weekend.
The result is that only the first 5 databases get consistency checked.
Attached is my approach to add an option called OrderByLastKnownGood.
Best regards,
Oliver
MaintenanceSolution_order_by_LastKnownGood.zip
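
For reference, on SQL Server 2016 SP2 and 2017+ the last successful CHECKDB is exposed through DATABASEPROPERTYEX, so the ordering could be derived without parsing DBCC DBINFO; a minimal sketch:

    SELECT [name],
           CAST(DATABASEPROPERTYEX([name], 'LastGoodCheckDbTime') AS datetime) AS LastKnownGood
    FROM sys.databases
    WHERE state_desc = N'ONLINE'
    ORDER BY LastKnownGood ASC; -- oldest (or never-checked, 1900-01-01) first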

Database backup to Azure giving incorrect parameter errors

I'm running the script that you have, with my personal values:
EXECUTE dbo.DatabaseBackup @databases = 'USER_DATABASES',
@url = 'https://myaccount.blob.core.windows.net/mycontainer',
@credential = 'mycredential',
@BackupType = 'FULL',
@compress = 'Y',
@verify = 'Y'

My values seem correct, but I'm getting the following errors:
Msg 50000, Level 16, State 1, Procedure DatabaseBackup, Line 862 [Batch Start Line 0]
The value for the parameter @url is not supported.

Msg 50000, Level 16, State 1, Procedure DatabaseBackup, Line 869 [Batch Start Line 0]
The value for the parameter @credential is not supported.

Any suggestions on how to fix this?

RESAMPLE - Incorrect Syntax

When a RESAMPLE occurs, it fails because of incorrect syntax. The fix is to include WITH before RESAMPLE. I will take this, make the update, and create a PR after testing.
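
For reference, the corrected statement form (table and statistics names are placeholders):

    UPDATE STATISTICS [dbo].[MyTable] ([MyStatistic]) WITH RESAMPLE;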

Handling transient errors when backing up to Azure URL

I occasionally see failed backups in one of my instances in an Azure VM that uses the backup to Azure BLOB storage feature. The errors generally look like this in the ERRORLOG:

Error: 18210, Severity: 16, State: 1.
BackupVirtualDeviceFile::RequestDurableMedia: Flush failure on backup device 'https://.blob.core.windows.net//_LOG_20180509_003000.trn'. Operating system error Backup to URL received an exception from the remote endpoint. Exception Message: The client could not finish the operation within specified timeout..

Working with Microsoft Premier Support, they had me add some trace flags that cause me to get BackupToUrl log files. The one that matches with the error above (timezone differences) has the following entries:

5/9/2018 12:35:02 AM: An unexpected exception occurred during communication on VDI Channel.
5/9/2018 12:35:02 AM: Exception Info: The client could not finish the operation within specified timeout.
5/9/2018 12:35:02 AM: Stack: at Microsoft.SqlServer.VdiInterface.VDI.AsyncIOCompletion(BlobRequestOptions options, List`1 asyncResults, CloudPageBlob pageBlob, Boolean onFlush)
at Microsoft.SqlServer.VdiInterface.VDI.PerformPageDataTransfer(CloudPageBlob pageBlob, AccessCondition leaseCondition, Boolean forBackup)
5/9/2018 12:35:02 AM: The Active queue had 0 requests until we got a clearerror
5/9/2018 12:35:02 AM: A fatal error occurred during Engine Communication, exception information follows
5/9/2018 12:35:02 AM: Exception Info: The client could not finish the operation within specified timeout.
5/9/2018 12:35:02 AM: Stack: at Microsoft.SqlServer.VdiInterface.VDI.PerformPageDataTransfer(CloudPageBlob pageBlob, AccessCondition leaseCondition, Boolean forBackup)
at BackupToUrl.Program.MainInternal(String[] args)

After consulting with their internal groups, the Support Engineer replied to me today with the following (the bolding was highlighting in the email from Microsoft):

I was unexpectedly oof for most of the day Friday, and could not wrap up my conversations with PG on this. I have been working with them on yours as well as another case with similar symptoms – the common factor being the use of the backup script being found at https://ola.hallengren.com/ . Based on the analysis and conversations here is the current state:
• Cause/Solution: As noted in the link below when there are sudden spikes in request load for the storage account it could result in few request timeouts. The solution would be to retry the failed operation (in this case backups). This is documented below:

https://docs.microsoft.com/en-us/azure/storage/common/storage-performance-checklist?toc=%2fazure%2fstorage%2fblobs%2ftoc.json#subheading14

Throttling/ServerBusy
In some cases, the storage service may throttle your application or may simply be unable to serve the request due to some transient condition and return a "503 Server busy" message or "500 Timeout". This can happen if your application is approaching any of the scalability targets, or if the system is rebalancing your partitioned data to allow for higher throughput. The client application should typically retry the operation that causes such an error: attempting the same request later can succeed. However, if the storage service is throttling your application because it is exceeding scalability targets, or even if the service was unable to serve the request for some other reason, aggressive retries usually make the problem worse. For this reason, you should use an exponential back off (the client libraries default to this behavior). For example, your application may retry after 2 seconds, then 4 seconds, then 10 seconds, then 30 seconds, and then give up completely. This behavior results in your application significantly reducing its load on the service rather than exacerbating any problems.
Note that connectivity errors can be retried immediately, because they are not the result of throttling and are expected to be transient.
So, please check to see if the script has some sort of a flag to retry the backup operations that fail due to transient conditions with storage accounts.
• Long term: Product group is looking at implementing changes in code to retry the backup failures, but they are looking into pros/cons of the same and it will likely take a few more months before the fix (if approved) is in place.

We will also be working on documentation changes to reflect the learnings from these cases so that customers are better informed about these issues.

The DatabaseBackup stored procedure does not seem to have any mechanism to retry on error. While I would think that Microsoft should have built this into the BACKUP command code when they added the backup to URL feature, I'm not optimistic about that happening any time soon.

I found this: https://support.microsoft.com/en-us/help/4023679/fix-timeout-when-you-back-up-a-large-database-to-url-in-sql-server-201, which claims the issue was fixed in SQL Server 2012 SP3 CU10, but we are running 2012 SP4 and the problem obviously still exists. We've had the problem when backing up a 16 MB ldf, so their assertion that this happens only on "large" databases doesn't seem right. Or they fixed a different problem from the one I'm seeing. Regardless, I'm hoping that DatabaseBackup can be enhanced to retry on the appropriate errors.
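
In the meantime, the job step itself could be wrapped in a retry loop with exponential back-off, roughly as Microsoft suggests; a hedged sketch (attempt count and delays are placeholders):

    DECLARE @Attempt int = 1;

    WHILE @Attempt <= 4
    BEGIN
        BEGIN TRY
            EXECUTE dbo.DatabaseBackup
                @Databases = 'USER_DATABASES',
                @URL = 'https://myaccount.blob.core.windows.net/mycontainer',
                @Credential = 'mycredential',
                @BackupType = 'LOG';
            BREAK; -- success
        END TRY
        BEGIN CATCH
            IF @Attempt = 4 THROW; -- out of retries, re-raise the error
            DECLARE @Delay char(8) = CASE @Attempt WHEN 1 THEN '00:00:02'
                                                   WHEN 2 THEN '00:00:10'
                                                   ELSE '00:00:30' END;
            WAITFOR DELAY @Delay; -- back off before the next attempt
            SET @Attempt += 1;
        END CATCH;
    END;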

Option to exclude the backup type folder

Please provide an option to remove the backup type folder (e.g., FULL, DIFF, and LOG). For other folders with the backup type embedded (e.g., FULL_COPY_ONLY), remove just the part of the folder name that refers to the backup type (e.g., FULL_COPY_ONLY becomes COPY_ONLY).

For the native SQL backups, this would require changing the DIFF backup extension to something else like "dif" so that the file cleanup works correctly. I don't know what this would mean for other backup products...perhaps this is an invalid option for some. Perhaps also provide an option to set the extension used when making a backup? In this case, it would be the user's responsibility to ensure that extensions and cleanup work properly together.

New feature for CommandExecute: log @Execute into CommandLog

Hello

I propose to add a column [Executed] char(1) to the CommandLog table.

This column can then capture whether the statement was executed or not, from the @Execute variable.
To do this, a small code change is needed in the CommandExecute stored procedure: at line 130 (IF @LogToTable = 'Y'), add the variable to the insert into the CommandLog table.
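
The schema side of the proposal would be something like (a sketch):

    ALTER TABLE [dbo].[CommandLog] ADD [Executed] char(1) NULL;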

Many thanks for sharing your great work.

Best Regards
Filip

IndexOptimize: @Databases parameter checking database nonexistence makes job fail

With the latest version of IndexOptimize, a new check on the @Databases parameter was introduced that verifies each database exists (lines 908-913), for Availability Group verification; if one does not, it raises an error. Yesterday I noticed a new version and, as many times before, modified it to put an 'N' on the create-jobs parameter and ran the script. But tonight one of our defrag index jobs failed with this error:

There was an issue with defragmenting of these databases
ALL_DATABASES,-mydatabase,-mynextdatabase,-evenmoredatabases...
The following databases in the @databases parameter do not exist: [mydatabase]

In our case, the database name had the hyphen character (-) prefix to exclude that particular database from the index maintenance so it would not affect anything else. We also have an AOAG group with 3 servers and several databases, and everything works OK. I removed the database from the parameter, but I would suggest that the existence check skip names prefixed with the hyphen character, as an excluded database will not affect the process anyway.

Set procs to recompile to avoid plan cache noise

Hi Ola,

We do this across the Blitz* scripts:

ALTER PROCEDURE [dbo].[sp_Blitz]
    @Help TINYINT = 0 ,
...
WITH RECOMPILE
AS

Which helps us to avoid interfering in anyone's plan cache, or one of our procs being detected by... well, another one of our procs. Heh.

This would cover all the non-dynamic SQL statements in your procs, but the dynamic bits would (I believe) need recompile hints individually.
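
For the query-style dynamic statements, that would presumably mean appending the hint before execution, along these lines (the variable name is assumed, not the procedure's actual internals; OPTION (RECOMPILE) applies only to SELECT/DML statements, not to ALTER INDEX commands):

    SET @CurrentCommand = @CurrentCommand + N' OPTION (RECOMPILE)';
    EXECUTE sp_executesql @stmt = @CurrentCommand;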

On the plus side, the only one I've seen cause any kind of resource consumption is the call to sys.dm_db_index_physical_stats in dbo.IndexOptimize.

Low priority, but I appreciate your time.

Thanks,
Erik

LiteSpeed Adaptive Compression

When using LiteSpeed compression you can only set @CompressionLevel values from 0 to 8.

It would be great to be able to use @adaptivecompression instead of @CompressionLevel.

https://support.quest.com/technical-documents/litespeed-for-sql-server/8.5/installation-guide/xp_backup_database#@adaptiv

@adaptivecompression Automatically selects the optimal compression level based on CPU usage or Disk IO. You can tell Adaptive Compression to optimize backups either for size or for speed. This argument accepts one of the following values:
Size
Speed

Option to ignore "Lock Request Time out" (error 1222) during IndexOptimize

Hi Ola,
it would be great if we could have an option to ignore error 1222 during IndexOptimize.
My customers try to work around this issue.
Some customers create exceptions for specific agent jobs.
Some customers create one additional job step, to ignore this error.
It would be useful to have an option to ignore error 1222.
Best regards,
Oliver

Possible use of database extended properties to select DBs to integrity check

We currently use your scripts for integrity checks, splitting VLDBs and smaller DBs into their own jobs. We are now getting issues with timings and would like to split the non-VLDBs into two jobs. I was wondering if it is possible to use extended properties to decide which DBs to select, rather than excluding and including, as the extended properties can be created when a new DB is created and we would not have to change the jobs.
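
A hedged sketch of how such a selection could be assembled outside the procedures (SQL Server 2017+ for STRING_AGG; the property name 'MaintenanceGroup' and value 'Group1' are assumed conventions):

    DECLARE @sql nvarchar(max) = N'';
    DECLARE @Databases nvarchar(max);
    DECLARE @Result table (DatabaseName sysname);

    -- One branch per online database: return its name if it carries the tag
    SELECT @sql += N' UNION ALL SELECT N''' + REPLACE(name, '''', '''''') + N''''
                 + N' WHERE EXISTS (SELECT 1 FROM ' + QUOTENAME(name) + N'.sys.extended_properties'
                 + N' WHERE class = 0 AND [name] = N''MaintenanceGroup'''
                 + N' AND CAST([value] AS nvarchar(128)) = N''Group1'')'
    FROM sys.databases
    WHERE state_desc = N'ONLINE';

    SET @sql = STUFF(@sql, 1, 11, N''); -- drop the leading ' UNION ALL '

    INSERT INTO @Result EXEC (@sql);

    SELECT @Databases = STRING_AGG(CAST(DatabaseName AS nvarchar(max)), N',') FROM @Result;

    EXECUTE dbo.DatabaseIntegrityCheck @Databases = @Databases, @LogToTable = 'Y';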

Backups fail for clusterless AG configuration

The script recognizes databases contained in a clusterless AG, but it does not work correctly because there is no cluster name. I fixed the script locally by checking the cluster name for N'' and naming it "clusterless", which allows backups.
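
A hedged sketch of that local workaround (the variable name is assumed from the backup file-name logic):

    IF @Cluster = N'' SET @Cluster = N'CLUSTERLESS';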

Backups of SQL 2017 AG Secondary fail

I have a SQL 2017 Availability Group and when I run the command:

EXECUTE [dbo].[DatabaseBackup] @Databases = 'USER_DATABASES', @Directory = N'H:\MSSQLBackups', @BackupType = 'FULL', @Verify = 'Y', @CleanupTime = 72, @CheckSum = 'Y', @LogToTable = 'Y'

on the secondary server of the AG, I get the message:

Msg 978, Level 14, State 1, Line 3
The target database ('AGTest') is in an availability group and is currently accessible for connections when the application intent is set to read only. For more information about application intent, see SQL Server Books Online.

The AG is set to allow read-intent-only connections on the secondary, which is the same configuration I have in other AGs on earlier versions of SQL Server. Does anyone have a workaround for this?

Ken

IndexOptimize @Indexes excluded table included in output

Including a space after an index name when excluding indexes results in the excluded index being included in the results.

Here is an example of the @Indexes parameter with a space before the comma after the first excluded index. This results in Table1 being included instead of excluded.

@Indexes = 'ALL_INDEXES, -MyDB.dbo.Table1 , -MyDB.dbo.Table2'

Removing the space before the comma results in the expected behavior of excluding the table.
@Indexes = 'ALL_INDEXES, -MyDB.dbo.Table1, -MyDB.dbo.Table2'

The execution results for both scenarios are attached.
ExecutionOutput.txt

Issues with Directory Structure/File Name Extension feature

I just tried to reconfigure backup jobs using the new Directory Structure/File Name Extension capabilities. However, I get "The value for the parameter @Cleanuptime is not supported. Cleanup is not supported if the token {BackupType} is not part of the directory." when setting @Cleanuptime as before and …

In full backup jobs:

@DirectoryStructure = '{ServerName}${InstanceName}{DirectorySeparator}{DatabaseName}{DirectorySeparator}',
@FileName = '{ServerName}${InstanceName}_{DatabaseName}_{Year}{Month}{Day}_{Hour}{Minute}{Second}_{FileNumber}_{BackupType}_{Partial}_{CopyOnly}.{FileExtension}',
@FileExtensionDiff = 'bak'

It doesn't matter if @FileExtensionDiff is provided or not.

In differential backup jobs:

@DirectoryStructure = '{ServerName}${InstanceName}{DirectorySeparator}{DatabaseName}{DirectorySeparator}',
@FileName = '{ServerName}${InstanceName}_{DatabaseName}_{Year}{Month}{Day}_{Hour}{Minute}{Second}_{FileNumber}_{BackupType}_{Partial}_{CopyOnly}.{FileExtension}',
@FileExtensionDiff = 'dif'

Expected behavior in folder structure and file names would be:
ServerName\DatabaseName\ServerName_DatabaseName_20180524_080000_FULL.bak
ServerName\DatabaseName\ServerName_DatabaseName_20180524_080100_DIFF.dif

Re-sorting the file name parts would ensure a correct sort order based on date and time, and the .bak/.dif file name extensions would be distinct for the cleanup procedure xp_delete_file.
This is how my folder/file structure looked when I took my backups via Maintenance Plans, which perfectly cleaned up the right file extensions using different cleanup times.

MaintenanceSolution default values

Are you willing to take input on the default values for this solution? It seems like the purpose of this .sql file is to make a simple install that fits most cases. I would say that for the SQL Agent backup jobs, it would make more sense to have a non-NULL value for CleanupTime and a NULL value for Directory.

Also, perhaps injecting some standard schedules (not applied to any of the jobs) would be helpful, such as:

  • Daily 7pm
  • Hourly 1h
  • Weekly Saturday 6pm
  • etc

(but that's perhaps going a bit out of scope)

Support for incremental statistics

I would like support for incremental statistics, which were introduced in SQL Server 2014.
It would be very nice to be able to cut down maintenance time on big partitioned tables even more, and be able to maintain good quality stats without scanning too much.
Perhaps run it at the same time when executing partitioned index maintenance, and update the stats for the same partition, if the option is turned on?
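
For reference, incremental statistics (SQL Server 2014+) allow per-partition updates instead of rescanning the whole table; a minimal sketch with placeholder names:

    UPDATE STATISTICS [dbo].[MyPartitionedTable] ([MyStat]) WITH RESAMPLE ON PARTITIONS (3);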

Thanks,
Jan

Custom Retention Policies

It would be great to have custom retention policies, i.e. rather than keeping backups for x hours, something like:

  • keep 1 full backup per x months
  • keep 1 full backup per x weeks

Re-indexing a database's tables by size

Please look at re-indexing a database's tables/indexes based on the size of the index. If you re-index smaller tables/indexes first, you reduce the chance of the database expanding because there isn't enough free space for the larger tables.

Enable support for Resumable index builds in conjunction with @TimeLimit

Strict maintenance windows (such as those imposed by Azure Automation -- 3 hours) can cause maintenance on large tables to fail.

@TimeLimit can be used to limit the overall execution time; however, it may only get partially through an index before the time limit is reached.

Resumable Online Index Rebuilds along with the "retry" nature of Azure Automation could complete a long running job over several iterations.
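
For reference, the underlying syntax (SQL Server 2017+ for resumable rowstore rebuilds); when MAX_DURATION expires the rebuild pauses instead of rolling back, and a later run can pick it up:

    ALTER INDEX [IX_MyIndex] ON [dbo].[MyLargeTable]
    REBUILD WITH (ONLINE = ON, RESUMABLE = ON, MAX_DURATION = 180 MINUTES);

    -- Resume the paused rebuild in the next maintenance window
    ALTER INDEX [IX_MyIndex] ON [dbo].[MyLargeTable] RESUME;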

IndexOptimize runs a FOR XML query that executes in AUTO mode, which returns references to derived table aliases.

Under compatibility level 90 or later, the query returns references to the derived table alias instead of to the derived table's base tables, so the consuming code may need to account for the change. The affected statement is at line 1343, column 60.

Here’s the affected block.

Current:

    IF @CurrentIndexID IS NOT NULL AND (@CurrentPageCount IS NOT NULL OR @CurrentFragmentationLevel IS NOT NULL)
    BEGIN
    SET @CurrentExtendedInfo = (SELECT *
                                FROM (SELECT CAST(@CurrentPageCount AS nvarchar) AS [PageCount],
                                             CAST(@CurrentFragmentationLevel AS nvarchar) AS Fragmentation
                                ) ExtendedInfo FOR XML AUTO, ELEMENTS)
    END

Should be:

    IF @CurrentIndexID IS NOT NULL AND (@CurrentPageCount IS NOT NULL OR @CurrentFragmentationLevel IS NOT NULL)
    BEGIN
    SET @CurrentExtendedInfo = (SELECT *
                                FROM (SELECT CAST(@CurrentPageCount AS nvarchar) AS [PageCount],
                                             CAST(@CurrentFragmentationLevel AS nvarchar) AS Fragmentation
                                ) ExtendedInfo FOR XML RAW('Fragmentation'), ELEMENTS)
    END

Filter statistics on is_incremental

This is regarding the new IndexOptimize script released recently with incremental statistics support. Would it be possible to add a filter so that incremental statistics can be maintained separately on partitioned tables and regular statistics separately? For example, if I want to update only incremental statistics I would use INCREMENTAL = 'Y'; if not, only regular statistics.
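
For reference, the flag such a filter would key on is already exposed in sys.stats (SQL Server 2014+):

    SELECT OBJECT_NAME([object_id]) AS TableName, [name] AS StatisticsName
    FROM sys.stats
    WHERE is_incremental = 1;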

Setting to help job restart where it left off in case of error

We have run into an issue with a server that has a very long-running backup job. Occasionally the full backup job will fail for some reason, often after it has been running for many hours. We can't simply restart the job, because it will start backing up the databases that it has already backed up, when we need it to pick up where it left off and finish the list.

I would like to add an optional parameter to help resolve this issue. This new parameter would allow us to skip any database that would otherwise be included in the backup job if it has already been backed up within the last X number of days. In the event of a failure I could then set this parameter to 2 and fire the job off again.

Add Backing up SSAS databases to the backup module

This would be a very nice addition to the scripts. I am downloading it today again and have not seen this as a part of the tooling. Of course maybe I missed it.
But if not, this would be a very cool addition to the current tooling.

Delete backups from Secondaries when @cleanupMode is set to 'BEFORE_BACKUP'

Consider a scenario where you have an AG running across 3 nodes and your backups are being taken locally. Each node has all the default jobs deployed and scheduled to run at the same time. As the databases are part of the AG, backups are only taken on the primary node. After failover to a different node, the backups stored on the previous primary are not deleted.

I thought that this could be happening since the default for the cleanup is AFTER_BACKUP and since no backups were happening on that node, the cleanup was not happening. I tried changing this to 'BEFORE_BACKUP' to no avail.

@olahallengren mentioned in stack exchange ( https://dba.stackexchange.com/questions/206902/cleanup-of-ag-databases-on-secondaries-nodes)

"The current design is that the stored procedure will decide if the database should be backed up. Only if the database should be backed up, it will go into the code that does the work (creates sub-directories, backup, verify, and cleanup).

It could make sense that if you are running with @CleanupMode = 'BEFORE_BACKUP', then it should delete backups, even if the database should not be backed up.
"

Backup multiple database at the same time using multiple agent jobs

Current behavior: only one DatabaseBackup job can be configured, and combinations of @Databases such as 'USER_DATABASES' or 'USER_DATABASES,-Db1' are not dynamic enough for adds, changes, and deletes in larger multi-tenant environments.

Is there a simple way to back up multiple user databases by creating multiple SQL Agent jobs that split the work on something like database_id (or some other method), so the load is fairly evenly distributed without missing or duplicating any backup work?

Example: say I have 400 user databases and want to create 4 Agent jobs for DatabaseBackup - USER_DATABASES - FULL. How can I split them evenly, have them all working on separate databases, and still make it dynamic enough that no database is skipped, omitted, or left behind? I do not care if one job runs a little longer than another.
A single job that can only back up one database at a time unnecessarily extends the total job duration. Multiple processes can drastically reduce this time in a larger multi-tenant environment.

We would also want the same functionality for log and diff backups.
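
One hedged approach: let each of the N jobs build its own @Databases list by hashing the database name, which keeps bucket assignment stable as databases are added and dropped (SQL Server 2017+ for STRING_AGG; the bucket count and this job's bucket number are placeholders):

    DECLARE @Databases nvarchar(max);

    SELECT @Databases = STRING_AGG(CAST([name] AS nvarchar(max)), N',')
    FROM sys.databases
    WHERE database_id > 4                 -- skip the system databases
      AND ABS(CHECKSUM([name])) % 4 = 0;  -- this job handles bucket 0 of 4

    EXECUTE dbo.DatabaseBackup
        @Databases = @Databases,
        @Directory = N'C:\Backup',
        @BackupType = 'FULL';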

Option to allow for custom text in the backup file name

We perform a lot of one-off backups of databases, prior to manual database updates or other ad hoc work. We like to append our names and a description to these backups to help us when it comes time to clean them up (we don't feel comfortable deleting some of these on a regular schedule).

I'm thinking something along the lines of @CustomText = '_TicketNNNN_BeforeManualUpdate_sbryant', which would create a backup named "ServerName$InstanceName_DatabaseName_FULL_COPY_ONLY_20180531_082341__TicketNNNN_BeforeManualUpdate_sbryant.bak". The custom text could go anywhere in the name, just wanted to provide an example.

Let me know if I can provide any additional information.
Thanks!
Sam

Log and Incremental Backups not running in SQL 2016 HA AGs

I'm not sure if this is a bug or not, but I set up a three-node SQL Server 2016 AlwaysOn Availability Group.

Without realizing the impact, I set the backup preferences of this AG to perform backups on Any Replica, and left the backup priority at the default 50 on all three nodes.

I installed the maintenance solution on all three nodes.

The full backups from the default USER_DATABASES - FULL job run fine per the schedule I set.

The Incremental backups and the Log backups do not run. The job runs, but it skips all databases in the AG. If I run the execute step manually, it lists those AG databases with the AG name, but no backups occur.

If I change the backup priority so they are not equal, then the log and incremental backups run successfully from the node with the highest priority.

I'm not sure if the script is looking at the backup priority value, deciding it's not higher than any of the other nodes', and skipping it - maybe because it assumes some other node with a higher priority should do the job?
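
For what it's worth, DatabaseBackup appears to rely on SQL Server's backup-preference function to decide whether the local replica should take the backup, so you can inspect what each node would decide (run this on the node in question):

    SELECT d.[name],
           sys.fn_hadr_backup_is_preferred_replica(d.[name]) AS IsPreferredBackupReplica
    FROM sys.databases AS d
    WHERE d.replica_id IS NOT NULL; -- AG databases only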
