xeokit / xeokit-sdk
Open source JavaScript SDK for viewing high-detail, full-precision 3D BIM and AEC models in the Web browser.
Home Page: https://xeokit.io
License: Other
Canvas injects a DIV into the DOM to provide a colored background for the canvas when it is transparent. This DIV is full screen and has the default blue color.
This needs to be disabled.
We need to be able to pin annotations on surfaces of PerformanceModel Entities.
Implement this capability using the depth-buffer technique used for 3D picking in SceneJS.
The LICENSE file is LGPL, but the documentation states GPL.
On devices where window.devicePixelRatio returns a value greater than 1, there is a visible reduction in perceived quality of the 3D rendering.
This happens on most mobile devices (tablets and phones), where the canvas size calculated by the viewer/scene/canvas/Canvas.js class can have a much lower pixel count than the physical pixel count of the canvas, due to DPI scaling by the browser.
One simple solution seems to be allowing the desired devicePixelRatio (ideally taken from window.devicePixelRatio) to be provided externally when creating the Viewer instance, and propagating it all the way down to the Canvas class (falling back to 1, for example), so that the "scaling factor" can be taken into account when setting the canvas width/height and the viewport bounds.
I did a quick test and this appears to produce nice sharp edges in the mobile viewer (on both tablet and phone), although I still have some issues with it (e.g. PerformanceModel picking seems to suffer a clipping effect: it only works within a region of the viewer canvas).
If you want, I can create a PR with those changes (propagating the devicePixelRatio down to the Canvas class) so that we can collaborate further on solving this.
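A minimal sketch of the proposed propagation, sizing the drawing buffer in physical pixels while keeping the CSS size in logical pixels. This helper and its parameter names are hypothetical, illustrating the idea rather than the actual xeokit API:

```javascript
// Sketch: size a canvas using an externally supplied devicePixelRatio.
// This helper is hypothetical; it is not the actual xeokit Canvas API.
function resizeCanvas(canvas, cssWidth, cssHeight, pixelRatio) {
    const ratio = pixelRatio || 1; // fall back to 1 when not provided
    // Drawing-buffer size in physical pixels
    canvas.width = Math.round(cssWidth * ratio);
    canvas.height = Math.round(cssHeight * ratio);
    // CSS size stays in logical pixels
    canvas.style.width = cssWidth + "px";
    canvas.style.height = cssHeight + "px";
    // The viewport (and pick region) must then use the scaled size,
    // which is likely related to the picking clipping mentioned above:
    // gl.viewport(0, 0, canvas.width, canvas.height);
    return {width: canvas.width, height: canvas.height};
}
```

Anywhere that reads canvas dimensions for picking would also need to apply the same ratio, which may explain the clipped pick region observed in the quick test.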
Ability to incrementally load (i.e. stream geometry into) a PerformanceModel while it is rendering.
Extend PerformanceModel with "tiles". These are optional bins within which entities may be created. Finalizing a tile makes it immediately visible.
createTile({ // Start building a tile
    id: "myTile1"
});

// Create a reusable geometry
createGeometry({ // Shareable (instanced) geometry
    id: "myGeometry1",
    tileId: "myTile1", // <<---------- Include this geometry in our tile
    positions: [..],
    normals: [..],
    indices: [..]
});

// Create mesh with its own inline geometry
createMesh({ // Mesh with unique geometry
    id: "myMesh1",
    tileId: "myTile1", // <<---------- Add mesh to our tile
    positions: [..],
    normals: [..],
    indices: [..]
});

// Create mesh that uses our reusable geometry
createMesh({ // Mesh with instanced geometry
    id: "myMesh2",
    tileId: "myTile1", // <<---------- Add mesh to our tile
    geometryId: "myGeometry1" // <<--- Geometry must be in same tile
});

// Create entity that contains our two meshes
createEntity({
    id: "myEntity1",
    tileId: "myTile1", // <<---------- Add entity to our tile
    meshIds: ["myMesh1", "myMesh2"] // <<--- Meshes must be in same tile
});

// Start building another tile concurrently
createTile({
    id: "myTile2"
});

// Finalize our first tile, making its entities visible
// We can then no longer add anything to that tile
finalizeTile("myTile1");

//...
Some objects that have reused geometries are missing after the model has been imported.
If we disable geometry instancing (to force geometry batching) by inserting false in GLTFPerformanceLoader, then those geometries are visible again. So this looks like a breakage in the geometry instancing mechanism of PerformanceModel.
When we destroy a drawable, i.e. Mesh#destroy(), it does not get removed from the renderer display list.
In BatchingPickNormalsRenderer, BatchingPickMeshRenderer and BatchingPickDepthRenderer, drawElements should use a count and offset to draw only one entity.
Currently these renderers needlessly redraw all meshes, i.e.:
gl.drawElements(state.primitive, state.indicesBuf.numItems, state.indicesBuf.itemType, 0);
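A sketch of the intended fix, assuming each entity records its first index and index count within the layer's shared index buffer (the "portion" record and its fields are hypothetical). The byte offset passed to drawElements must account for the index element size:

```javascript
// Sketch: compute the byte offset for drawing only one entity's index
// sub-range within a shared index buffer.
const GL_UNSIGNED_INT = 5125; // WebGL enum value of gl.UNSIGNED_INT

function elementByteOffset(firstIndex, indexType) {
    // Uint32 indices are 4 bytes each, Uint16 indices are 2 bytes
    const bytesPerIndex = (indexType === GL_UNSIGNED_INT) ? 4 : 2;
    return firstIndex * bytesPerIndex;
}

// Intended use inside the pick renderers (needs a GL context, so shown
// only as a comment; "portion" is a hypothetical per-entity record):
// const offset = elementByteOffset(portion.firstIndex, state.indicesBuf.itemType);
// gl.drawElements(state.primitive, portion.numIndices, state.indicesBuf.itemType, offset);
```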
PerformanceModel only supports 2D picking of entire entities. Need to implement 3D surface picking for PerformanceModel.
PerformanceModel#colorize and PerformanceNode#colorize are disabled until this is complete.
For some reason GitHub is not providing source code view for at least this example (but provides it for most others): https://github.com/xeolabs/xeokit-sdk/blob/master/examples/canvas_screenshots_png_getSnapshot.html
Encapsulating management of HTML within JS is an anti-pattern; it's better to manage HTML externally to the likes of NavCubePlugin etc.
Change NavCube to accept the ID of a canvas, i.e.:
new NavCubePlugin(viewer, {
canvasId: "myNavCubeCanvas",
visible: true, // Initially visible (default)
size: 250, // NavCube size in pixels (default is 200)
alignment: "topRight", // Align NavCube to top-right of Viewer canvas
topMargin: 170, // 170 pixels margin from top of Viewer canvas
cameraFly: true, // Fly camera to each selected axis/diagonal
cameraFitFOV: 45, // How much field-of-view the scene takes once camera has fitted it to view
cameraFlyDuration: 0.5 // How long (in seconds) camera takes to fly to each new axis/diagonal
});
Setting Entity edges true causes the whole PerformanceModel to be rendered with edges enhanced.
Flag masking is not implemented for edges in batchingEdgesShaderSource.js and instancingEdgesShaderSource.js.
Adds the ability to inject GLTFLoaderPlugin with a custom data access strategy through which it can load glTF JSON and binary array buffers. By default, the GLTFLoaderPlugin uses its own instance of a GLTFDefaultDataSource, which loads assets using an XMLHttpRequest.
The example below shows how to create a custom data access strategy that uses xeokit's utils module (which is actually private, but we'll use it here to demonstrate things).
import {Viewer} from "../src/viewer/Viewer.js";
import {GLTFLoaderPlugin} from "../src/plugins/GLTFLoaderPlugin/GLTFLoaderPlugin.js";
import {utils} from "./../src/viewer/scene/utils.js";
const viewer = new Viewer({
canvasId: "myCanvas",
transparent: true
});
// Custom data access strategy - implementation happens
// to be the same as GLTFDefaultDataSource
class MyDataSource {
constructor() {
}
// Gets metamodel JSON
getMetaModel(metaModelSrc, ok, error) {
console.log("MyDataSource#getMetaModel(" + metaModelSrc + ", ... )");
utils.loadJSON(metaModelSrc,
(json) => {
ok(json);
},
function(errMsg) {
error(errMsg);
});
}
// Gets glTF JSON.
getGLTF(glTFSrc, ok, error) {
console.log("MyDataSource#getGLTF(" + glTFSrc + ", ... )");
utils.loadJSON(glTFSrc,
(gltf) => {
ok(gltf);
},
function(errMsg) {
error(errMsg);
});
}
// Gets glTF binary attachment
getArrayBuffer(glTFSrc, binarySrc, ok, error) {
console.log("MyDataSource#getArrayBuffer(" + glTFSrc + ", " + binarySrc + ", ... )");
utils.loadArraybuffer(binarySrc,
(arrayBuffer) => {
ok(arrayBuffer);
},
function(errMsg) {
error(errMsg);
});
}
}
const gltfLoader = new GLTFLoaderPlugin(viewer, {
dataSource: new MyDataSource()
});
const model = gltfLoader.load({
id: "myModel",
src: "./models/gltf/duplex/scene.gltf",
metaModelSrc: "./metaModels/duplex/metaModel.json", // Creates MetaObject instances in scene.metaScene.metaObjects
edges: true
});
Could you add support for this?
I manually disabled lambertian in batchingDrawShaderSource.js. That got me close, but washed out the color.
Thanks.
You have multiple locations that reimplement loadJSON and eval the result to load the JSON. This could be combined into the one that's already in utils.js. There we can use JSON.parse to safely load the JSON, or use the error callback on parser errors.
Instead, I would assume loadJSON (https://github.com/xeolabs/xeokit-sdk/blob/master/src/viewer/utils.js#L158) already returns you a parsed JSON object, or calls the error callback in case of parsing errors.
In viewer/scene/PerformanceModel/lib/batching/batchingBuffer.js, the following piece of code can be found:
const MAX_VERTS = SLICING ? (bigIndicesSupported ? 5000000 : 65530) : 5000000;
That code works flawlessly on desktop browsers.
BUT on Android/iOS devices, the mobile versions of the JS engines don't seem to like those big initial sizes (tested with Chrome+Firefox on Android and Chrome+Safari on iOS).
In the best case, it's possible to attach the PC web inspector to the device and see an error like "you consumed too much memory". In the worst case (Safari/iOS), the browser tab simply crashes (with a very generic error message).
Of course this does not happen when loading a single performance model (via the glTF loader plugin in my case), but as more and more models are loaded, that big initial size combines with things like (in the same class):
this.indices = bigIndicesSupported ? new Uint32Array(MAX_VERTS * 6) : new Uint16Array(MAX_VERTS * 6); // FIXME
this.edgeIndices = bigIndicesSupported ? new Uint32Array(MAX_VERTS * 6) : new Uint16Array(MAX_VERTS * 6); // FIXME
(those two previous lines alone pre-allocate 5M × 6 × sizeof(uint32 or uint16) = 120 or 60 MB)
I've successfully tested with lowering the first line from 5M to 200k (after some trial and error):
const MAX_VERTS = SLICING ? (bigIndicesSupported ? 200000 : 65530) : 200000;
And then I was able to load four quite complex models at the same time on mobile browsers.
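The pre-allocation figures quoted above can be checked with a quick sketch:

```javascript
// Sketch: bytes pre-allocated for one of the index arrays above,
// whose length is MAX_VERTS * 6.
function indexArrayBytes(maxVerts, bigIndicesSupported) {
    const bytesPerIndex = bigIndicesSupported ? 4 : 2; // Uint32Array vs Uint16Array
    return maxVerts * 6 * bytesPerIndex;
}

console.log(indexArrayBytes(5000000, true) / 1e6); // 120 (MB per array, original size)
console.log(indexArrayBytes(200000, true) / 1e6);  // 4.8 (MB per array, lowered size)
```

With both the indices and edgeIndices arrays allocated per buffer, the saving from lowering MAX_VERTS compounds quickly as more models (and therefore more buffers) are loaded.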
The reason for titling this issue "discussion of..." is that I'm not fully aware of the consequences of this proposed change (nor of the "most proper" initial array sizes).
E.g. I also see that the 5M number is also used in viewer/scene/PerformanceModel/lib/batching/batchingLayer.js:
const tempUint8Vec4 = new Uint8Array((bigIndicesSupported ? 5000000 : 65530) * 4);
So there we go, just wanted to share this finding and have some discussion about whether it's a good change.
Hi, I was checking out this SDK as a replacement for BIMSURFER. Great job!
But I'm facing an issue: the plugin loads structural IFC files properly, but for IFC files containing pipes or those types of geometries, the objects look pixelated and shake while moving the camera.
Does this have anything to do with the plugin being based on PerformanceModel?
Originally posted by @sha-N in #5 (comment)
In viewer/scene/mesh/Mesh.js, there is the following piece of code:
/**
* Comparison function used by the renderer to determine the order in which xeokit should render the Mesh .... etc
*
* xeokit requires this method because Mesh implements {@link Drawable}.
*
* Sorting is essential for rendering performance... etc
*
* ... etc
*/
stateSortCompare(mesh1, mesh2) {
return (mesh1._state.layer - mesh2._state.layer)
|| (mesh1._drawRenderer.id - mesh2._drawRenderer.id) // Program state
|| (mesh1._material._state.id - mesh2._material._state.id) // Material state
|| (mesh1._geometry._state.id - mesh2._geometry._state.id); // Geometry state
}
It happens that when the scene contains mixed models, i.e. a) models loaded with GLTFLoaderPlugin and b) models created by hand using new ReadableGeometry, the previous code breaks because either mesh1._state or mesh2._state is not set.
Simply modifying the function body to...
stateSortCompare(mesh1, mesh2) {
if (!mesh1._state || !mesh2._state)
{
return 0;
}
return (mesh1._state.layer - mesh2._state.layer)
|| (mesh1._drawRenderer.id - mesh2._drawRenderer.id) // Program state
|| (mesh1._material._state.id - mesh2._material._state.id) // Material state
|| (mesh1._geometry._state.id - mesh2._geometry._state.id); // Geometry state
}
... seems to work around the problem.
Not sure this is the proper solution, though.
Do you want me to submit a PR with the change?
The following piece of code is found in the loadJSON function of viewer/scene/utils.js:
request.addEventListener('load', function (event) {
var response = event.target.response;
if (this.status === 200) {
try {
ok(JSON.parse(response));
} catch (e) {
err(`utils.loadJSON(): Failed to parse JSON response - ${e}`);
}
} else if (this.status === 0) {
// Some browsers return HTTP Status 0 when using non-http protocol
// e.g. 'file://' or 'data://'. Handle as success.
console.warn('loadFile: HTTP Status 0 received.');
ok(response);
} else {
err(event);
}
}, false);
Notice how the success code differs between received status codes 200 and 0:
Code for received status code 200:
try {
ok(JSON.parse(response));
} catch (e) {
err(`utils.loadJSON(): Failed to parse JSON response - ${e}`);
}
Code for received status code 0:
ok(response);
The bugfix is quite straightforward: it only requires treating status code 0 the same way as status code 200.
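A sketch of the fix, applying the same parse-and-catch path to status 0. The logic is extracted here into a standalone helper for illustration; in utils.js it would stay inline in the load handler:

```javascript
// Sketch: handle HTTP status 0 the same way as status 200, parsing the
// JSON and routing parse failures to the error callback.
function handleResponse(status, response, ok, err) {
    if (status === 200 || status === 0) {
        if (status === 0) {
            // Some browsers return HTTP Status 0 when using non-http
            // protocols, e.g. 'file://' or 'data://'. Handle as success.
            console.warn("loadFile: HTTP Status 0 received.");
        }
        try {
            ok(JSON.parse(response));
        } catch (e) {
            err(`utils.loadJSON(): Failed to parse JSON response - ${e}`);
        }
    } else {
        err("HTTP error: " + status);
    }
}
```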
Do you want me to create a PR for this one?
We can do this immediately, before we have the quantized positions sorted out.
When calling SectionPlane#destroy()
, if the SectionPlane
was created by a SectionPlanesPlugin
then the plugin does not get notified, so continues to show it in the overview canvas, and continues to provide a control gizmo for it.
const sectionPlane = sectionPlanesPlugin.createSectionPlane({...});
sectionPlane.destroy(); // SectionPlanesPlugin does not get notified
Undefined 'ctx' in these examples. Can be seen in JS console when running these.
I'm trying to use xeokit-sdk in a React app with webpacker.
I retrieve the source from GitHub in my package.json like this:
"xeokit-sdk": "https://github.com/xeokit/xeokit-sdk.git#aeb49af837f4d13e325ff0eb5d389f9ad180a7d2"
Everything gets retrieved and downloaded to the node_modules dir.
But how do I import, for example, the Viewer and the GLTFLoaderPlugin?
Obviously statements like this don't work:
import {Viewer} from "../src/viewer/Viewer.js";
import {GLTFLoaderPlugin} from "../src/plugins/GLTFLoaderPlugin/GLTFLoaderPlugin.js";
I also tried a lot of variants.
What would the statements be in a webpacker environment?
In xeogl I did it like this:
import Xeogl from 'xeogl';
Knowing how to use the classes in a webpack environment would be relevant for a lot of people, I guess.
The problem was the positionsDecodeMatrix uniform being loaded only when the instancing pick shader was bound, not when it was rendering. This caused all instanced meshes to use the same positionsDecodeMatrix.
The query in loadBIMServerMetaModel is for ifc2x3tc1 and does not support ifc4.
Updating the query based on the current project schema could solve this; I can make a pull request if you like.
The Viewer#destroy() method is not calling Scene#destroy().
As a consequence, when we destroy a Viewer and then subsequently create another Viewer in the page, we end up with two sets of spinner DOM elements, because the last set was not destroyed by Scene#destroy().
Currently Viewer#destroy() is an empty method. It needs to be implemented thus:
destroy() {
this.scene.destroy();
}
Hi there,
I am trying out xeokit-sdk and think it would be a great help to have it available on the npm registry.
For now I have pinned the git repo in my dependencies with a commit hash, but there is still the question of versioning. I saw the GitHub tags in the repo, but the package.json file still has the default version number 1.0.0.
It would be neat to have an npm dependency with a proper version number, if possible.
FWIW I have used release-it to solve similar issues in previous projects and found it quite handy.
These are not implemented yet, so currently PerformanceModel#setPickable() and PerformanceModel#setColorize() are just empty methods.
PerformanceModel is used as the default scene representation within BIMServerLoaderPlugin and GLTFLoaderPlugin. If you need these two functions for Entitys loaded by these plugins, then a workaround (for non-huge models) is to load models with performance: false, which will internally use xeokit's scene graph representation, whose Node and Mesh Entity types support pickable and colorize.
const bimServerLoader = new BIMServerLoaderPlugin(viewer, {
bimServerClient: bimServerClient
});
const model = bimServerLoader.load({
id: "myModel",
poid: ...,
roid: ....,
schema:....,
performance: false // <<--------------------- Add this
});
When a PerformanceModel contains more than one BatchingLayer, some objects appear distorted.
Each BatchingLayer has a dequantization matrix, which is used to decompress its positions within the shader. This bug was caused by the wrong dequantization matrix being loaded for each BatchingLayer.
Hi,
I'm working on an EPLAN plugin loader (WIP). I know that xeogl is built for BIM, but it is really good as an MCAD viewer.
ToDo
Could you add support for rendering both front and back faces for the PerformanceModel?
Manually disabling gl.CULL_FACE in Renderer.js does the trick, but that is a big hack.
Also, could you explicitly add an edgeThreshold parameter in the PerformanceModel configuration object? Currently it is hardwired to 10 degrees.
Position information can be retrieved from BIMServer using the ServiceInterface method "getModelMinBounds".
Use that to auto-center and auto-scale models as they are loaded.
The intention is to prevent quantization errors as reported in #15, which occur when a model is too large or is positioned too far from the World-space origin.
We need a 3D gizmo for users to control the position and orientation of cross-section planes.
This is currently in progress and the gizmo looks like the screenshot below. Drag the arrows to translate the gizmo, drag the curved handles to rotate.
const sectionPlanes = new SectionPlanesPlugin(viewer);
const sectionPlane = sectionPlanes.createSectionPlane({
id: "mySectionPlane",
pos: [0, 0, 0],
dir: [0.5, 0.0, 0.5]
});
// each SectionPlane has a control, which is initially invisible
const sectionPlaneControl = sectionPlanes.controls["mySectionPlane"];
sectionPlaneControl.setVisible( true ); // User can now interact to reposition the section plane
sectionPlaneControl.setVisible( false ); // Dismiss control when done
Work remaining:
xeokit is derived from xeogl, but with many performance improvements and features. Detail these in a new FAQ entry.
In the example that uses BIMServerLoaderPlugin to load the Schependomlaan model, mousing over some objects, like the roof, highlights the wrong objects.
However, the problem does not exist with the equivalent glTF model loaded with GLTFLoaderPlugin.
But the bug can also be reproduced using the geometry batching benchmark, where some objects appear to be unpickable. This might be due to running out of picking color range for large numbers of objects.
This specification is work-in-progress.
This is a measurement tool inspired by the StreamBIM measurement plugin, shown here:
TODO
Hello,
As we briefly discussed before, I'm opening this issue to present an idea related to the performance of model loading (glTF models in my case).
The background
When we load a glTF model (8.6 MB in size) using the GLTFLoaderPlugin class, the following can be observed when we launch a performance profile (on Chrome in this case):
This means that it takes 2.6 s to load a fairly small model.
Further analysis reveals where the time is spent:
So the main time-consuming processes are in viewer/scene/math/buildEdgeIndices.js (54.8% of the model load time) and the transformAndOctEncodeNormals method in viewer/scene/PerformanceModel/lib/batching/batchingLayer.js (25.4% of the model load time).
For that 8.6 MB glTF file, this means that those two pieces of code by themselves take more than 80% of the model loading time.
Before getting to the real point of this issue, let's talk a little bit about those two pieces of code.
What is the purpose of viewer/scene/math/buildEdgeIndices.js?
Looking at the code, this comment is the key:
// an edge is only rendered if the angle (in degrees) between the face normals of the adjoining faces exceeds this value. default = 1 degree.
So the code (using some welding algorithm) filters out edges where the two adjoining faces do not have a minimum angle between their normals.
This makes sense: when edge rendering is enabled, coplanar faces do not get their connecting edges drawn.
So the purpose of this code seems to be a cleanup of the edges of the mesh.
What is the purpose of transformAndOctEncodeNormals in viewer/scene/PerformanceModel/lib/batching/batchingLayer.js?
I'm not an expert in 3D graphics, but it seems to apply "octahedron encoding" to the face normals so that they behave better when rendered by the GPU ("better" meaning more efficient GPU usage or less space in GPU memory: the encoded result has only two components instead of the three XYZ components of the raw normal).
So the purpose of this code seems to be preparing the geometry's normals for GPU rendering.
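For the curious, a compact sketch of octahedron normal encoding, the widely used technique the method name suggests (xeokit's exact quantization to integer components may differ; this shows the float-valued core):

```javascript
// Sketch: octahedron-encode a unit normal (x, y, z) into two values in
// [-1, 1] by projecting onto an octahedron and folding the lower
// hemisphere over the diagonals.
function octEncode(x, y, z) {
    const sum = Math.abs(x) + Math.abs(y) + Math.abs(z);
    let u = x / sum;
    let v = y / sum;
    if (z < 0) { // fold the lower hemisphere
        const uOld = u;
        u = (1 - Math.abs(v)) * (uOld >= 0 ? 1 : -1);
        v = (1 - Math.abs(uOld)) * (v >= 0 ? 1 : -1);
    }
    return [u, v];
}

// Decode two components back to a unit normal (e.g. in the shader).
function octDecode(u, v) {
    let x = u;
    let y = v;
    const z = 1 - Math.abs(u) - Math.abs(v);
    if (z < 0) { // unfold the lower hemisphere
        const xOld = x;
        x = (1 - Math.abs(y)) * (xOld >= 0 ? 1 : -1);
        y = (1 - Math.abs(xOld)) * (y >= 0 ? 1 : -1);
    }
    const len = Math.sqrt(x * x + y * y + z * z);
    return [x / len, y / len, z / len];
}
```

Since the encoding depends only on the geometry itself, it is exactly the kind of work that could be precomputed offline, as proposed below.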
The real point of this issue
The two pieces of code analyzed above are really a preparation stage that readies the geometry for optimal GPU-based rendering.
The idea is that those two pieces of code do not do any calculations that depend on runtime state or on interaction between loaded models, so their results could be precomputed.
This is what (in my case) I have:
(1) is done by some convenience tool that extracts data from IFC models
(2) is done by the GLTFLoaderPlugin class, which loads the glTF file into JS memory
(3) corresponds more or less to the two analyzed pieces of code
(4) (I think) is a combination of shaders and bound WebGL arrays
The point is that the process done by (3) is the one taking more than 80% of the time dedicated to model loading in xeokit.
If the following could be done...
... that would mean that xeokit, at 3D-visualization runtime, would not have to do such a computation-heavy pre-processing stage in order to load the model geometry into the GPU, and would reduce model load time by (around) 80%.
What will I try to do
If time pressure at work allows me, I will try to take steps towards a working prototype of the presented idea.
That would give xeokit world-class model loading performance.
I will try to keep this issue updated with the progress :-)
Loading models with complex geometries results in the following error:
ERROR: GL_INVALID_OPERATION : glDrawElementsInstancedANGLE: attempt to access out of range vertices in attribute 0
Google says it might be because of a wrong buffer size.
Is this expected at this stage, and is it being worked on?
@xeolabs this issue can be reproduced by loading the IFC file I sent earlier.