segmentio / analytics-php
The hassle-free way to integrate analytics into any php application.
Home Page: https://segment.com/libraries/php
License: MIT License
Currently, the #file_consumer approach requires running file_reader.py, which calls client#flush from the analytics-python lib. For those who can't install pip packages this won't work, and so a good batching solution is lost.
It would probably be best to port all the Python implementations into PHP.
On my development machine (Ubuntu 12.04 / PHP 5.3.10 / Symfony 2.0), the library fails at random times and reports this error:
Fatal error: Uncaught exception 'ErrorException' with message 'Notice: fwrite(): send of 577 bytes failed with errno=32 Broken pipe in /home/cethy/edu/vendor/Analytics-php/lib/analytics/consumers/socket.php line 134' in /home/cethy/edu/vendor/symfony/src/Symfony/Component/HttpKernel/Debug/ErrorHandler.php on line 67
ErrorException: Notice: fwrite(): send of 577 bytes failed with errno=32 Broken pipe in /home/cethy/edu/vendor/Analytics-php/lib/analytics/consumers/socket.php line 134 in /home/cethy/edu/vendor/symfony/src/Symfony/Component/HttpKernel/Debug/ErrorHandler.php on line 67
Call Stack:
0.6099 28510220 1. Analytics_Client->__destruct() /home/cethy/edu/vendor/Analytics-php/lib/analytics/client.php:0
0.6099 28510220 2. Analytics_QueueConsumer->__destruct() /home/cethy/edu/vendor/Analytics-php/lib/analytics/client.php:40
0.6099 28510220 3. Analytics_QueueConsumer->flush() /home/cethy/edu/vendor/Analytics-php/lib/analytics/consumers/queue_consumer.php:29
0.6099 28510472 4. Analytics_SocketConsumer->flushBatch() /home/cethy/edu/vendor/Analytics-php/lib/analytics/consumers/queue_consumer.php:64
0.6100 28511780 5. Analytics_SocketConsumer->makeRequest() /home/cethy/edu/vendor/Analytics-php/lib/analytics/consumers/socket.php:84
0.6100 28512000 6. fwrite() /home/cethy/edu/vendor/Analytics-php/lib/analytics/consumers/socket.php:134
0.6100 28513556 7. Symfony\Component\HttpKernel\Debug\ErrorHandler->handle() /home/cethy/edu/vendor/Analytics-php/lib/analytics/consumers/socket.php:134
Is this coming from my configuration, or is it a known issue?
Thanks.
https://segment.com/docs/connections/sources/catalog/libraries/server/php/
Segment::init("YOUR_WRITE_KEY", array(
"consumer" => "lib_curl",
"debug" => true,
"max_queue_size" => 10000,
"batch_size" => 100
));
max_queue_size Number, optional | The max size of the queue, defaults to 10000 items.
Xcode shipped a recent update that required a change to local php.ini settings.
It appears that any new events being written while the file consumer is processing the current backlog via send.php are silently (and completely) lost.
I set a cron job to run every few minutes as recommended in the docs and began processing a large backlog of events. Any new events that occurred while send.php was running were completely lost (as indicated by the dips in the chart below):
Hello,
I saw you made a lot of changes in the new release, but nothing works anymore :/
Is there documentation describing what I need to change in my application to make it work?
Thanks in advance ;)
Quick question.
Does the analytics-php SDK use blocking calls?
Our use case:
If the request that sends the tracking information to Segment is blocking, ideally we'd wrap the track call in a "shutdown" handler, so that our API response is sent back to the user before we try to send the tracking information to Segment.
I'd love to know whether we need to do this or not.
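If the consumer does block, one way to sketch the "shutdown" approach is below. This is a hedged illustration, not the library's own mechanism: the Segment class here is a tiny stub standing in for the real analytics-php client, deferTrack is a hypothetical helper, and fastcgi_finish_request() is assumed to be available (PHP-FPM).

```php
<?php
// Sketch: defer tracking until after the HTTP response is sent.
// "Segment" is a stub standing in for the real analytics-php client.
class Segment {
    public static $sent = [];
    public static function track(array $msg) { self::$sent[] = $msg; }
    public static function flush() {}
}

$deferred = [];

// Queue the call instead of doing network I/O inside the request.
function deferTrack(array $message) {
    global $deferred;
    $deferred[] = $message;
}

register_shutdown_function(function () {
    global $deferred;
    if (function_exists('fastcgi_finish_request')) {
        fastcgi_finish_request(); // flush the response to the client first
    }
    foreach ($deferred as $message) {
        Segment::track($message); // network I/O happens after the user got a response
    }
    Segment::flush();
});
```

With this pattern the request handler only appends to an in-memory queue; the actual send happens in the shutdown phase, after the response has been flushed.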
Look into speeding up the unit tests after the 10-second-per-250-item batch wait defaults were added.
We have been getting the following warning periodically, though it does not appear to affect the data being sent. I assume this is why requests are retried, to guard against random errors like this, but I wanted to make sure it is not indicative of some larger problem, hence this post!
A PHP Error was encountered
Severity: Notice
Message: fwrite(): send of 1747 bytes failed with errno=110 Connection timed out
Filename: Consumer/Socket.php
If this is anything more than expected, I'm happy to provide additional details if I can.
/cc @evnn
the spec developed and deployed with the iOS SDK: https://gist.github.com/reinpk/7bd33d29694578b06cce (ignore the requestTimestamp on batch flushing since we don't want to correct timestamps coming from a server)
I think it would be a good idea to be able to make my own consumer.
In my current app I need to write and read the logs from Amazon S3, from two different frontend servers.
Update the README status badge to CircleCI; it currently points to Travis.
I'm using the Segment PHP library in a project, but the host we're on has exec() disabled for security reasons. Is there any particular reason you're using exec() here instead of PHP's cURL functions?
please provide proper documentation. I'm not sure what to do.
It would be cool to have a Laravel package as well (perhaps for both 4.2 and 5).
Hi, am I missing something or is this library not namespaced?
Hey there,
it seems we cannot attach microsecond timestamps to events when using the library; it does not crash, it just truncates them to seconds.
After some investigation, I believe I found where 'timestamp' is converted from microtime(true) to a value rounded to seconds:
https://github.com/segmentio/analytics-php/blob/master/lib/Segment/Client.php#L134-L135
I believe the ISO 8601 formatting removes the microsecond precision.
It makes sense, in that, with the file consumer, the send command would currently crash on microseconds:
https://github.com/segmentio/analytics-php/blob/master/send.php#L72
strtotime would not accept microseconds, I believe.
However, it would really be most helpful to be able to pass microseconds!
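For what it's worth, formatting sub-second precision is straightforward; a minimal sketch (formatTimestamp is a hypothetical helper, not the library's code, and send.php's strtotime parsing would need a matching change):

```php
<?php
// Sketch: format a microtime(true)-style float as ISO 8601 with millisecond
// precision, instead of date("c", (int) $timestamp), which drops the fraction.
function formatTimestamp($ts) {
    // 'U.u' parses "seconds.microseconds"; %.6F avoids locale-dependent output.
    $dt = DateTime::createFromFormat('U.u', sprintf('%.6F', $ts), new DateTimeZone('UTC'));
    return $dt->format('Y-m-d\TH:i:s.v\Z'); // 'v' = milliseconds (PHP 7.0+)
}
```

This keeps the ISO 8601 shape the API already accepts while preserving millisecond precision.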
https://github.com/segmentio/spec
- anonymousId (renamed from sessionId) for clarity.
- integrations (was providers) object moved out of context, for cleaner logs.
- messageId, for easily tracing calls through to the raw logs.
- library is now an object with name and version, for consistency.
Hello.
Looking at the Segment debugger's "raw" tab, I see the event arrives with the "writeKey" attribute in the body of the message. This is not the case for calls made from the JavaScript client. Is this a mistake?
Thank you.
The SDK needs to ensure it doesn’t use up too much memory/storage and should drop messages on the client after a maximum number of messages have been queued. Factors which impact this decision are:
Should any limits be hit, emit an Error log event.
Tasks:
The SDK should provide a clean interface to create an analytics client object in the most idiomatic manner.
The constructor's signature should include the following parameters to help configure the SDK:
Parameter | Type | Description | Default Value | Required |
---|---|---|---|---|
write key | string | Source write key | n/a | Yes |
host | string | Endpoint for tracking API | api.segment.com | No |
Set up our Segment tracking and alias it to Analytics for convenience:
class_alias('Segment', 'Analytics');
Analytics::init("YOUR_WRITE_KEY");
Why not just include this in your repository?
require_once 'Segment.php';
class Analytics extends \Segment {}
Adding
"Analytics": "vendor/segmentio/analytics-php/lib/"
to the composer.json "autoload" section would let me do a "use Analytics" in my PHP file instead of a "require_once …". I noticed no namespaces were being used in the classes.
Has huge speed improvements
The SDK should have a mechanism to cope with failed data uploads because of intermittent network errors or other issues. The retry strategy should be:
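The strategy details were elided above; one common approach is exponential backoff with a cap, sketched here with assumed values (the base delay, cap, and doubling factor are illustrative, not the SDK's documented behavior):

```php
<?php
// Sketch of exponential backoff with a cap. Values are assumptions for
// illustration, not the SDK's actual retry configuration.
function backoffDelayMs($attempt, $baseMs = 100, $capMs = 10000) {
    // Double the delay on each attempt, never exceeding the cap.
    return min($capMs, $baseMs * (2 ** $attempt));
}
```

In practice a random jitter is usually added to each delay so that many clients retrying at once do not synchronize their requests.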
If I call track() with properties set to an empty array, the event does not show up in the Debugger, nor in Google Analytics. Calling track() without "properties" => array() works fine, of course.
Example:
Segment::track(array(
"userId" => "abc",
"event" => "something",
"properties" => array(),
));
The documentation states that the properties key is optional, but I imagine it shouldn't fail if it is specified without any contents.
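A plausible cause (an assumption, not confirmed by the maintainers): PHP's json_encode serializes an empty array as [] rather than {}, so the API may receive a JSON list where it expects an object. A hedged sketch of a workaround, with normalizeProperties as a hypothetical helper:

```php
<?php
// json_encode() turns an empty PHP array into "[]", but "properties" should
// be a JSON object. Casting the empty array to stdClass makes it encode as "{}".
// normalizeProperties() is a hypothetical helper, not part of the library.
function normalizeProperties(array $message) {
    if (isset($message["properties"]) && $message["properties"] === []) {
        $message["properties"] = new stdClass();
    }
    return $message;
}
```

Non-empty property arrays with string keys already encode as JSON objects, so only the empty case needs special handling.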
We use Amplitude as a Destination for Segment as our primary tool for Data Analytics as a business.
Context:
Issue:
We want to be able to explicitly identify a user as Platform = iOS, or Platform = Android on a Server-side Identify call made from Segment via the PHP Segment Library
How do we go about achieving this?
context.library is added only if the context field is not set on the message: https://github.com/segmentio/analytics-php/blob/master/lib/Segment/Client.php#L180
Shouldn't the result of $this->getContext() be merged with the context sent in the message:
if (!isset($msg["context"])) {
$msg["context"] = array();
}
$msg["context"] = array_merge($msg["context"], $this->getContext());
The 'File' consumer appears to ignore timestamps when loading historical events using "track".
Using the Socket consumer and an example timestamp of 1396594800, the below code correctly sends (and Segment receives) an event with a timestamp of "2014-04-04T07:00:01.939Z".
But using the File consumer, the exact same code results in Segment receiving a timestamp of "now".
Analytics::track([
'userId' => 100,
'event' => 'Sent Message',
'timestamp' => 1396594800
]);
Interestingly, the analytics.log file does seem to contain a correct timestamp of "2014-04-04T00:00:00-07:00", but when parsed and sent using send.php, it arrives at Segment as "now".
The socket implementation attempts to write data in chunks, but it never advances through the buffer and always writes the entire payload. You are probably missing a substr call in makeRequest.
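The fix might look like the following sketch (a simplified illustration, not the library's actual makeRequest, which also has retry and error-handling logic):

```php
<?php
// Sketch of the chunked-write fix: advance through the buffer with substr()
// so each fwrite() sends only the bytes that have not gone out yet.
function writeAll($socket, $body) {
    $written = 0;
    $total   = strlen($body);
    while ($written < $total) {
        // Write only the unsent tail of the buffer.
        $sent = fwrite($socket, substr($body, $written));
        if ($sent === false || $sent === 0) {
            return false; // connection dropped mid-write
        }
        $written += $sent;
    }
    return true;
}
```

fwrite() on a socket may return a short count under load, which is exactly when the missing substr matters: without it, the same prefix gets re-sent and the payload is corrupted.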
Is there any way to track a google e-commerce transaction using the lib?
Because on the client side you could use
analytics.ready(function () {
ga('ecommerce:addTransaction', {
'id': '1234', // Transaction ID. Required.
'affiliation': 'Acme Clothing', // Affiliation or store name.
'revenue': '11.99', // Grand Total.
'shipping': '5', // Shipping.
'tax': '1.29' // Tax.
});
});
But what about the server side? I can't find anything like that.
Thanks!
Hi guys, just wanted to ask: how come my events don't get logged in Google Analytics? We have integrated Mixpanel, Intercom and Active Campaign using this analytics-php library and it's working, but Google Analytics doesn't work. Thanks.
Update testing suite to use CircleCI, and ensure that each commit has the full suite of unit tests and end-to-end tests running.
Tasks:
Hi there,
We've run into some file permissioning issues with the file consumer in PHP.
fopen(/tmp/analytics.log): failed to open stream: Permission denied
The cron is writing the file with one set of permissions while the PHP app is writing with another. Basically one is blowing the other up due to differing permissions.
Shouldn't the file permissions perhaps just be set to 777 by both, rather than being user-specific?
I'm reluctantly switching back to sockets for now.
Hi there, I'm trying to log some events using the API, but without much success.
I've enabled debug mode:
\Analytics::init( 'mykey' , array("debug" => true,
"error_handler" => function ($code, $msg) { var_dump($code); var_dump($msg);die();}));
And the result is just
string '500' (length=3)
string 'Internal Server Error' (length=21)
Any clue?
Thanks
Hi there,
We are using the PHP client to send data to Segment. To make sure events are sent, we are using a messaging system (Kafka). It works like this:
event -> message in Kafka -> track -> if issue, re-track, otherwise move to next message
However, it seems some events are still missing. After a quick investigation, part of the problem seems to be that the enqueue method [QueueConsumer] returns true whenever flush was called. This method is called for each tracked event, but it does not return the success state of the flush method! Therefore this return value cannot and should not be used as an indicator of whether the operation succeeded.
It might be better to return the flush result directly:
protected function enqueue($item) {
$count = count($this->queue);
if ($count > $this->max_queue_size)
return false;
$count = array_push($this->queue, $item);
if ($count > $this->batch_size)
$this->flush();
return true;
}
might become
protected function enqueue($item) {
$count = count($this->queue);
if ($count > $this->max_queue_size)
return false;
$count = array_push($this->queue, $item);
if ($count > $this->batch_size)
return $this->flush();
return true;
}
or give some other way to know which of the batch items could not be sent when using track.
Best,
I'm getting the following curl error when trying to send events using the SDK:
"SSL certificate problem: self signed certificate in certificate chain"
Hello,
I'm using this library with a Magento 2 module and we have seen some events missing from the log.
There is no lock mechanism preventing the analytics.log file from being written with a new event at the same time it is renamed by the send.php cron.
I reproduced the case by adding a delay between the fopen in File.php's constructor and the function that writes to the file, then launching the cron during this delay. The cron renamed the file, and the write function did not recreate it since there was no further fopen.
A first solution could be to add an fopen in File.php's write function in case the file is missing. That would recreate the log for the new event and reduce the chance of concurrent access.
A second solution could be to write to a file named with a timestamp and have send.php only process files at least one minute old.
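The first proposal could be sketched like this (an illustration, not the library's actual File.php; writeEvent is a hypothetical helper, and the cron side would need to take the same advisory lock before renaming for full safety):

```php
<?php
// Sketch: open the log on every write (recreating it if the cron renamed it
// away) and hold an exclusive advisory lock so a write and a rename cannot
// interleave. The path handling is illustrative.
function writeEvent($path, $line) {
    $fh = fopen($path, 'a'); // 'a' recreates the file if it was renamed away
    if ($fh === false) {
        return false;
    }
    $ok = false;
    if (flock($fh, LOCK_EX)) { // block until concurrent holders release the lock
        $ok = fwrite($fh, $line . "\n") !== false;
        flock($fh, LOCK_UN);
    }
    fclose($fh);
    return $ok;
}
```

Note that flock() is advisory: it only protects against other processes that also acquire the lock, so send.php would have to cooperate.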
I don't know if it's of any use to you guys, but I wrote a wrapper package for Laravel 4, with L5 support coming soon.
Implement two configurable parameters:
These parameters should be available when instantiating the analytics client, and they should have separate getters and setters.
Tasks:
I first do an identify:
Segment::identify([
"userId" => "12345abcde",
"traits" => [
"name" => "James Brooks",
"email" => "[email protected]",
]
]);
followed immediately by a track:
Segment::track([
"userId" => "12345abcde",
"event" => "Signed Up",
"properties" => [
"was_awesome" => true,
]
]);
The track()ed data is lost even when flush()ed.
It was working perfectly in the local environment because we use the queue:listen command, but in production it does not work with the queue:work command.
I also tested locally with both commands and saw the same behavior: it works with queue:listen but not with queue:work.
No error, nothing.
Hi,
Thanks for releasing a php client library for segment.io. Would it possible to include a composer.json (http://getcomposer.org/) file at the root of the project for easy dependency management?
Cheers,
Andrew
https://segment.com/docs/sources/server/http/#batch
Batch
The batch method lets you send a series of identify, group, track, page and screen requests in a single batch, saving on outbound requests. Our server-side and mobile sources make use of this method automatically for higher performance. There is a maximum of 500KB per batch request and 32KB per call.
Here's what the batch request signature looks like:
POST https://api.segment.io/v1/batch
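An illustrative batch payload built in PHP, matching the docs quoted above (the events themselves are made up; per the HTTP API docs, the write key goes in the HTTP Basic auth username):

```php
<?php
// Build a batch payload per the HTTP tracking API docs. Event contents here
// are invented examples.
$batch = [
    "batch" => [
        ["type" => "identify", "userId" => "u1", "traits" => ["email" => "u1@example.com"]],
        ["type" => "track", "userId" => "u1", "event" => "Signed Up"],
    ],
];
$body = json_encode($batch);
// POST $body to https://api.segment.io/v1/batch with
// Content-Type: application/json and the write key as the Basic auth user.
```

Keeping each call under 32KB and the whole body under 500KB is the caller's responsibility when batching manually.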
you can reproduce in the simulator :)